00:00:00.001 Started by upstream project "autotest-per-patch" build number 131843 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.139 Fetching changes from the remote Git repository 00:00:00.143 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.260 > git --version # 'git version 2.39.2' 00:00:00.260 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.296 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.296 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.918 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.931 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.946 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.946 > git config core.sparsecheckout # timeout=10 00:00:06.957 > git read-tree -mu HEAD # timeout=10 00:00:06.973 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.992 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.992 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:07.074 [Pipeline] Start of Pipeline 00:00:07.090 [Pipeline] library 00:00:07.091 Loading library shm_lib@master 00:00:07.092 Library shm_lib@master is cached. Copying from home. 00:00:07.110 [Pipeline] node 00:00:07.148 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.150 [Pipeline] { 00:00:07.162 [Pipeline] catchError 00:00:07.163 [Pipeline] { 00:00:07.178 [Pipeline] wrap 00:00:07.189 [Pipeline] { 00:00:07.200 [Pipeline] stage 00:00:07.202 [Pipeline] { (Prologue) 00:00:07.433 [Pipeline] sh 00:00:07.711 + logger -p user.info -t JENKINS-CI 00:00:07.726 [Pipeline] echo 00:00:07.727 Node: GP8 00:00:07.735 [Pipeline] sh 00:00:08.027 [Pipeline] setCustomBuildProperty 00:00:08.036 [Pipeline] echo 00:00:08.037 Cleanup processes 00:00:08.041 [Pipeline] sh 00:00:08.320 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.320 2951625 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.334 [Pipeline] sh 00:00:08.617 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.617 ++ grep -v 'sudo pgrep' 00:00:08.617 ++ awk '{print $1}' 00:00:08.617 + sudo kill -9 00:00:08.617 + true 00:00:08.631 [Pipeline] cleanWs 00:00:08.642 [WS-CLEANUP] Deleting project workspace... 00:00:08.642 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.648 [WS-CLEANUP] done 00:00:08.652 [Pipeline] setCustomBuildProperty 00:00:08.665 [Pipeline] sh 00:00:08.942 + sudo git config --global --replace-all safe.directory '*' 00:00:09.118 [Pipeline] httpRequest 00:00:09.468 [Pipeline] echo 00:00:09.469 Sorcerer 10.211.164.101 is alive 00:00:09.476 [Pipeline] retry 00:00:09.478 [Pipeline] { 00:00:09.490 [Pipeline] httpRequest 00:00:09.494 HttpMethod: GET 00:00:09.495 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.495 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.496 Response Code: HTTP/1.1 200 OK 00:00:09.497 Success: Status code 200 is in the accepted range: 200,404 00:00:09.497 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:10.425 [Pipeline] } 00:00:10.440 [Pipeline] // retry 00:00:10.448 [Pipeline] sh 00:00:10.732 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:11.009 [Pipeline] httpRequest 00:00:11.389 [Pipeline] echo 00:00:11.391 Sorcerer 10.211.164.101 is alive 00:00:11.400 [Pipeline] retry 00:00:11.402 [Pipeline] { 00:00:11.416 [Pipeline] httpRequest 00:00:11.419 HttpMethod: GET 00:00:11.420 URL: http://10.211.164.101/packages/spdk_45379ed84341f94a6e1ec3eab4cc1f9c219d3e90.tar.gz 00:00:11.420 Sending request to url: http://10.211.164.101/packages/spdk_45379ed84341f94a6e1ec3eab4cc1f9c219d3e90.tar.gz 00:00:11.434 Response Code: HTTP/1.1 200 OK 00:00:11.434 Success: Status code 200 is in the accepted range: 200,404 00:00:11.434 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_45379ed84341f94a6e1ec3eab4cc1f9c219d3e90.tar.gz 00:00:54.765 [Pipeline] } 00:00:54.785 [Pipeline] // retry 00:00:54.794 [Pipeline] sh 00:00:55.081 + tar --no-same-owner -xf spdk_45379ed84341f94a6e1ec3eab4cc1f9c219d3e90.tar.gz 00:01:03.221 [Pipeline] sh 00:01:03.505 + git -C spdk log --oneline -n5 00:01:03.505 45379ed84 module/compress: Cleanup vol data, when claim fails 00:01:03.505 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:01:03.505 1cbacb58f test/nvmf: Clarify comment about lack of support for iWARP in tests 00:01:03.505 169c3cd04 thread: set SPDK_CONFIG_MAX_NUMA_NODES to 1 if not defined 00:01:03.505 cab1decc1 thread: add NUMA node support to spdk_iobuf_put() 00:01:03.517 [Pipeline] } 00:01:03.532 [Pipeline] // stage 00:01:03.541 [Pipeline] stage 00:01:03.543 [Pipeline] { (Prepare) 00:01:03.560 [Pipeline] writeFile 00:01:03.575 [Pipeline] sh 00:01:03.856 + logger -p user.info -t JENKINS-CI 00:01:03.868 [Pipeline] sh 00:01:04.151 + logger -p user.info -t JENKINS-CI 00:01:04.162 [Pipeline] sh 00:01:04.443 + cat autorun-spdk.conf 00:01:04.443 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.443 SPDK_TEST_NVMF=1 00:01:04.443 SPDK_TEST_NVME_CLI=1 00:01:04.443 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.443 SPDK_TEST_NVMF_NICS=e810 00:01:04.443 SPDK_TEST_VFIOUSER=1 00:01:04.443 SPDK_RUN_UBSAN=1 00:01:04.443 NET_TYPE=phy 00:01:04.450 RUN_NIGHTLY=0 00:01:04.457 [Pipeline] readFile 00:01:04.488 [Pipeline] withEnv 00:01:04.490 [Pipeline] { 00:01:04.503 [Pipeline] sh 00:01:04.789 + set -ex 00:01:04.789 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:04.789 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.789 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.789 ++ SPDK_TEST_NVMF=1 00:01:04.789 ++ SPDK_TEST_NVME_CLI=1 00:01:04.789 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:04.789 ++ SPDK_TEST_NVMF_NICS=e810 00:01:04.789 ++ SPDK_TEST_VFIOUSER=1 00:01:04.789 ++ SPDK_RUN_UBSAN=1 00:01:04.789 ++ NET_TYPE=phy 00:01:04.789 ++ RUN_NIGHTLY=0 00:01:04.789 + case $SPDK_TEST_NVMF_NICS in 00:01:04.789 + DRIVERS=ice 00:01:04.789 + [[ tcp == \r\d\m\a ]] 00:01:04.789 + [[ -n ice ]] 00:01:04.789 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:04.789 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:04.789 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:04.789 rmmod: ERROR: Module irdma is not currently loaded 00:01:04.789 rmmod: ERROR: Module i40iw is not currently loaded 00:01:04.789 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:04.789 + true 00:01:04.789 + for D in $DRIVERS 00:01:04.789 + sudo modprobe ice 00:01:04.789 + exit 0 00:01:04.799 [Pipeline] } 00:01:04.815 [Pipeline] // withEnv 00:01:04.822 [Pipeline] } 00:01:04.836 [Pipeline] // stage 00:01:04.848 [Pipeline] catchError 00:01:04.850 [Pipeline] { 00:01:04.867 [Pipeline] timeout 00:01:04.867 Timeout set to expire in 1 hr 0 min 00:01:04.869 [Pipeline] { 00:01:04.884 [Pipeline] stage 00:01:04.886 [Pipeline] { (Tests) 00:01:04.901 [Pipeline] sh 00:01:05.187 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.187 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.187 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.187 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:05.187 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.187 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.187 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:05.187 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.187 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.187 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.187 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:05.187 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.187 + source /etc/os-release 00:01:05.187 ++ NAME='Fedora Linux' 00:01:05.187 ++ VERSION='39 (Cloud Edition)' 00:01:05.187 ++ ID=fedora 00:01:05.187 ++ VERSION_ID=39 00:01:05.187 ++ VERSION_CODENAME= 00:01:05.187 ++ PLATFORM_ID=platform:f39 00:01:05.187 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:05.187 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:05.187 ++ LOGO=fedora-logo-icon 00:01:05.187 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:05.187 ++ HOME_URL=https://fedoraproject.org/ 00:01:05.187 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:05.187 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:05.187 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:05.187 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:05.187 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:05.187 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:05.187 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:05.187 ++ SUPPORT_END=2024-11-12 00:01:05.187 ++ VARIANT='Cloud Edition' 00:01:05.187 ++ VARIANT_ID=cloud 00:01:05.187 + uname -a 00:01:05.187 Linux spdk-gp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:05.187 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:06.564 Hugepages 00:01:06.564 node hugesize free / total 00:01:06.564 node0 1048576kB 0 / 0 00:01:06.564 node0 2048kB 0 / 0 00:01:06.564 node1 1048576kB 0 / 0 00:01:06.564 node1 2048kB 0 / 0 
00:01:06.564 00:01:06.564 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:06.565 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:06.565 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:06.823 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:06.823 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:06.823 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:06.823 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:06.823 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:06.823 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:06.823 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:06.823 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:06.824 + rm -f /tmp/spdk-ld-path 00:01:06.824 + source autorun-spdk.conf 00:01:06.824 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.824 ++ SPDK_TEST_NVMF=1 00:01:06.824 ++ SPDK_TEST_NVME_CLI=1 00:01:06.824 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.824 ++ SPDK_TEST_NVMF_NICS=e810 00:01:06.824 ++ SPDK_TEST_VFIOUSER=1 00:01:06.824 ++ SPDK_RUN_UBSAN=1 00:01:06.824 ++ NET_TYPE=phy 00:01:06.824 ++ RUN_NIGHTLY=0 00:01:06.824 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:06.824 + [[ -n '' ]] 00:01:06.824 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.824 + for M in /var/spdk/build-*-manifest.txt 00:01:06.824 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:06.824 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.824 + for M in /var/spdk/build-*-manifest.txt 00:01:06.824 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:06.824 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.824 + for M in /var/spdk/build-*-manifest.txt 00:01:06.824 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:06.824 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.824 ++ uname 00:01:06.824 + [[ Linux == \L\i\n\u\x ]] 00:01:06.824 + sudo dmesg -T 00:01:06.824 + sudo dmesg --clear 00:01:06.824 + dmesg_pid=2952311 00:01:06.824 + [[ Fedora Linux == FreeBSD ]] 00:01:06.824 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.824 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.824 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:06.824 + sudo dmesg -Tw 00:01:06.824 + [[ -x /usr/src/fio-static/fio ]] 00:01:06.824 + export FIO_BIN=/usr/src/fio-static/fio 00:01:06.824 + FIO_BIN=/usr/src/fio-static/fio 00:01:06.824 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:06.824 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:06.824 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:06.824 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.824 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.824 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:06.824 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.824 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.824 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:07.083 Test configuration: 00:01:07.084 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.084 SPDK_TEST_NVMF=1 00:01:07.084 SPDK_TEST_NVME_CLI=1 00:01:07.084 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:07.084 SPDK_TEST_NVMF_NICS=e810 00:01:07.084 SPDK_TEST_VFIOUSER=1 00:01:07.084 SPDK_RUN_UBSAN=1 00:01:07.084 NET_TYPE=phy 00:01:07.084 RUN_NIGHTLY=0 14:56:53 -- common/autotest_common.sh@1688 -- $ [[ n == y ]] 00:01:07.084 14:56:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:07.084 14:56:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:07.084 14:56:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:07.084 14:56:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:07.084 14:56:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:07.084 14:56:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.084 14:56:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.084 14:56:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.084 14:56:53 -- paths/export.sh@5 -- $ export PATH 00:01:07.084 14:56:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.084 14:56:53 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:07.084 14:56:53 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:07.084 14:56:53 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730123813.XXXXXX 00:01:07.084 14:56:53 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1730123813.nclk5o 00:01:07.084 14:56:53 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:07.084 14:56:53 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:07.084 14:56:53 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:07.084 14:56:53 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:07.084 14:56:53 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:07.084 14:56:53 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:07.084 14:56:53 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:07.084 14:56:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.084 14:56:53 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:07.084 14:56:53 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:07.084 14:56:53 -- pm/common@17 -- $ local monitor 00:01:07.084 14:56:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:07.084 14:56:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:07.084 14:56:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:07.084 14:56:53 -- pm/common@21 -- $ date +%s 00:01:07.084 14:56:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:07.084 14:56:53 -- pm/common@25 -- $ sleep 1 00:01:07.084 14:56:53 -- pm/common@21 -- $ date +%s 00:01:07.084 14:56:53 -- pm/common@21 -- $ date +%s 00:01:07.084 14:56:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730123813 00:01:07.084 14:56:53 -- pm/common@21 -- $ date +%s 00:01:07.084 14:56:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730123813 00:01:07.084 14:56:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730123813 00:01:07.084 14:56:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730123813 00:01:07.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730123813_collect-vmstat.pm.log 00:01:07.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730123813_collect-cpu-load.pm.log 00:01:07.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730123813_collect-cpu-temp.pm.log 00:01:07.084 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730123813_collect-bmc-pm.bmc.pm.log 00:01:08.019 14:56:54 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:08.019 14:56:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:08.019 14:56:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:08.019 14:56:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:08.019 14:56:54 -- spdk/autobuild.sh@16 -- $ date -u 00:01:08.019 Mon Oct 28 01:56:54 PM UTC 2024 00:01:08.019 14:56:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:08.019 v25.01-pre-121-g45379ed84 00:01:08.019 14:56:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:08.019 14:56:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:08.019 14:56:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:08.019 14:56:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:08.019 14:56:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:08.019 14:56:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.019 ************************************ 00:01:08.019 START TEST ubsan 00:01:08.019 ************************************ 00:01:08.019 14:56:54 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:08.019 using ubsan 00:01:08.019 00:01:08.019 real 0m0.000s 00:01:08.019 user 0m0.000s 00:01:08.019 sys 0m0.000s 00:01:08.019 14:56:54 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:08.019 14:56:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:08.019 ************************************ 00:01:08.019 END TEST ubsan 00:01:08.019 ************************************ 00:01:08.019 14:56:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:08.019 14:56:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:08.019 14:56:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:08.019 14:56:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:08.019 14:56:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:08.019 14:56:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:08.019 14:56:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:08.019 14:56:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:08.020 14:56:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:08.278 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:08.278 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:08.538 Using 'verbs' RDMA provider 00:01:24.364 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:39.261 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:39.261 Creating mk/config.mk...done. 00:01:39.261 Creating mk/cc.flags.mk...done. 00:01:39.261 Type 'make' to build. 
00:01:39.261 14:57:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:39.261 14:57:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:39.261 14:57:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:39.261 14:57:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.261 ************************************ 00:01:39.261 START TEST make 00:01:39.261 ************************************ 00:01:39.261 14:57:26 make -- common/autotest_common.sh@1125 -- $ make -j48 00:01:39.832 make[1]: Nothing to be done for 'all'. 00:01:41.752 The Meson build system 00:01:41.752 Version: 1.5.0 00:01:41.752 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:41.752 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.752 Build type: native build 00:01:41.752 Project name: libvfio-user 00:01:41.752 Project version: 0.0.1 00:01:41.752 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:41.752 C linker for the host machine: cc ld.bfd 2.40-14 00:01:41.752 Host machine cpu family: x86_64 00:01:41.752 Host machine cpu: x86_64 00:01:41.752 Run-time dependency threads found: YES 00:01:41.752 Library dl found: YES 00:01:41.752 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:41.752 Run-time dependency json-c found: YES 0.17 00:01:41.752 Run-time dependency cmocka found: YES 1.1.7 00:01:41.752 Program pytest-3 found: NO 00:01:41.752 Program flake8 found: NO 00:01:41.752 Program misspell-fixer found: NO 00:01:41.752 Program restructuredtext-lint found: NO 00:01:41.752 Program valgrind found: YES (/usr/bin/valgrind) 00:01:41.752 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:41.752 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.752 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.752 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:41.752 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:41.752 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:41.752 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:41.752 Build targets in project: 8 00:01:41.752 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:41.752 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:41.752 00:01:41.752 libvfio-user 0.0.1 00:01:41.752 00:01:41.752 User defined options 00:01:41.752 buildtype : debug 00:01:41.752 default_library: shared 00:01:41.752 libdir : /usr/local/lib 00:01:41.752 00:01:41.752 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:42.335 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:42.597 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:42.597 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:42.597 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:42.597 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:42.597 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:42.597 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:42.597 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:42.597 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:42.597 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:42.597 [10/37] Compiling C object samples/null.p/null.c.o 00:01:42.597 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:42.597 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:42.597 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:42.597 [14/37] Compiling C object samples/server.p/server.c.o 00:01:42.597 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:42.597 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:42.597 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:42.597 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:42.597 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:42.597 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:42.597 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:42.597 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:42.597 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:42.859 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:42.859 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:42.859 [26/37] Compiling C object samples/client.p/client.c.o 00:01:42.859 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:42.859 [28/37] Linking target samples/client 00:01:42.859 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:42.859 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:43.119 [31/37] Linking target test/unit_tests 00:01:43.119 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:43.119 [33/37] Linking target samples/null 00:01:43.119 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:43.119 [35/37] Linking target samples/server 00:01:43.119 [36/37] Linking target samples/lspci 00:01:43.119 [37/37] Linking target samples/gpio-pci-idio-16 00:01:43.119 INFO: autodetecting backend as ninja 00:01:43.119 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:43.119 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.061 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.061 ninja: no work to do. 00:01:50.731 The Meson build system 00:01:50.731 Version: 1.5.0 00:01:50.731 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:50.731 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:50.731 Build type: native build 00:01:50.731 Program cat found: YES (/usr/bin/cat) 00:01:50.731 Project name: DPDK 00:01:50.731 Project version: 24.03.0 00:01:50.731 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:50.731 C linker for the host machine: cc ld.bfd 2.40-14 00:01:50.731 Host machine cpu family: x86_64 00:01:50.731 Host machine cpu: x86_64 00:01:50.731 Message: ## Building in Developer Mode ## 00:01:50.731 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.731 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.731 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.731 Program python3 found: YES (/usr/bin/python3) 00:01:50.731 Program cat found: YES (/usr/bin/cat) 00:01:50.731 Compiler for C supports arguments -march=native: YES 00:01:50.731 Checking for size of "void *" : 8 00:01:50.731 Checking for size of "void *" : 8 (cached) 00:01:50.731 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:50.731 Library m found: YES 00:01:50.731 Library numa found: YES 00:01:50.731 Has header "numaif.h" : YES 00:01:50.731 Library fdt found: NO 00:01:50.731 Library execinfo found: NO 00:01:50.731 Has header "execinfo.h" : YES 00:01:50.731 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:50.731 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.731 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.731 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.731 Run-time dependency openssl found: YES 3.1.1 00:01:50.731 Run-time dependency libpcap found: YES 1.10.4 00:01:50.731 Has header "pcap.h" with dependency libpcap: YES 00:01:50.731 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.731 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.731 Compiler for C supports arguments -Wformat: YES 00:01:50.731 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.731 Compiler for C supports arguments -Wformat-security: NO 00:01:50.731 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.731 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.731 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.731 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.731 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.731 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.731 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.731 Compiler for C supports arguments -Wundef: YES 00:01:50.731 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.731 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.731 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:50.731 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.731 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.731 Program objdump found: YES (/usr/bin/objdump) 00:01:50.731 Compiler for C supports arguments -mavx512f: YES 00:01:50.731 Checking if "AVX512 checking" compiles: YES 00:01:50.731 Fetching value of define "__SSE4_2__" : 1 00:01:50.731 Fetching value of define "__AES__" : 1 00:01:50.731 Fetching value of define "__AVX__" : 1 00:01:50.731 Fetching value of define "__AVX2__" : (undefined) 00:01:50.732 Fetching value of define "__AVX512BW__" : (undefined) 00:01:50.732 Fetching value of define "__AVX512CD__" : (undefined) 00:01:50.732 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:50.732 Fetching value of define "__AVX512F__" : (undefined) 00:01:50.732 Fetching value of define "__AVX512VL__" : (undefined) 00:01:50.732 Fetching value of define "__PCLMUL__" : 1 00:01:50.732 Fetching value of define "__RDRND__" : 1 00:01:50.732 Fetching value of define "__RDSEED__" : (undefined) 00:01:50.732 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:50.732 Fetching value of define "__znver1__" : (undefined) 00:01:50.732 Fetching value of define "__znver2__" : (undefined) 00:01:50.732 Fetching value of define "__znver3__" : (undefined) 00:01:50.732 Fetching value of define "__znver4__" : (undefined) 00:01:50.732 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.732 Message: lib/log: Defining dependency "log" 00:01:50.732 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.732 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.732 Checking for function "getentropy" : NO 00:01:50.732 Message: lib/eal: Defining dependency "eal" 00:01:50.732 Message: lib/ring: Defining dependency "ring" 00:01:50.732 Message: lib/rcu: Defining dependency "rcu" 00:01:50.732 Message: lib/mempool: Defining dependency "mempool" 00:01:50.732 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.732 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.732 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.732 Compiler for C supports arguments -mpclmul: YES 00:01:50.732 Compiler for C supports arguments -maes: YES 00:01:50.732 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.732 Compiler for C supports arguments -mavx512bw: YES 00:01:50.732 Compiler for C supports arguments -mavx512dq: YES 00:01:50.732 Compiler for C supports arguments -mavx512vl: YES 00:01:50.732 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.732 Compiler for C supports arguments -mavx2: YES 00:01:50.732 Compiler for C supports arguments -mavx: YES 00:01:50.732 Message: lib/net: Defining dependency "net" 00:01:50.732 Message: lib/meter: Defining dependency "meter" 00:01:50.732 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.732 Message: lib/pci: Defining dependency "pci" 00:01:50.732 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.732 Message: lib/hash: Defining dependency "hash" 00:01:50.732 Message: lib/timer: Defining dependency "timer" 00:01:50.732 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.732 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.732 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.732 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.732 Message: lib/power: Defining dependency "power" 00:01:50.732 Message: lib/reorder: Defining dependency 
"reorder" 00:01:50.732 Message: lib/security: Defining dependency "security" 00:01:50.732 Has header "linux/userfaultfd.h" : YES 00:01:50.732 Has header "linux/vduse.h" : YES 00:01:50.732 Message: lib/vhost: Defining dependency "vhost" 00:01:50.732 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.732 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.732 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.732 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.732 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.732 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.732 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.732 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.732 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.732 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.732 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:50.732 Configuring doxy-api-html.conf using configuration 00:01:50.732 Configuring doxy-api-man.conf using configuration 00:01:50.732 Program mandb found: YES (/usr/bin/mandb) 00:01:50.732 Program sphinx-build found: NO 00:01:50.732 Configuring rte_build_config.h using configuration 00:01:50.732 Message: 00:01:50.732 ================= 00:01:50.732 Applications Enabled 00:01:50.732 ================= 00:01:50.732 00:01:50.732 apps: 00:01:50.732 00:01:50.732 00:01:50.732 Message: 00:01:50.732 ================= 00:01:50.732 Libraries Enabled 00:01:50.732 ================= 00:01:50.732 00:01:50.732 libs: 00:01:50.732 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.732 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.732 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.732 00:01:50.732 Message: 00:01:50.732 =============== 00:01:50.732 Drivers Enabled 00:01:50.732 =============== 00:01:50.732 00:01:50.732 common: 00:01:50.732 00:01:50.732 bus: 00:01:50.732 pci, vdev, 00:01:50.732 mempool: 00:01:50.732 ring, 00:01:50.732 dma: 00:01:50.732 00:01:50.732 net: 00:01:50.732 00:01:50.732 crypto: 00:01:50.732 00:01:50.732 compress: 00:01:50.732 00:01:50.732 vdpa: 00:01:50.732 00:01:50.732 00:01:50.732 Message: 00:01:50.732 ================= 00:01:50.732 Content Skipped 00:01:50.732 ================= 00:01:50.732 00:01:50.732 apps: 00:01:50.732 dumpcap: explicitly disabled via build config 00:01:50.732 graph: explicitly disabled via build config 00:01:50.732 pdump: explicitly disabled via build config 00:01:50.732 proc-info: explicitly disabled via build config 00:01:50.732 test-acl: explicitly disabled via build config 00:01:50.732 test-bbdev: explicitly disabled via build config 00:01:50.732 test-cmdline: explicitly disabled via build config 00:01:50.732 test-compress-perf: explicitly disabled via build config 00:01:50.732 test-crypto-perf: explicitly disabled via build config 00:01:50.732 test-dma-perf: explicitly disabled via build config 00:01:50.732 test-eventdev: explicitly disabled via build config 00:01:50.732 test-fib: explicitly disabled via build config 00:01:50.732 test-flow-perf: explicitly disabled via build config 00:01:50.732 test-gpudev: explicitly disabled via build config 00:01:50.732 test-mldev: explicitly disabled via build config 00:01:50.732 test-pipeline: explicitly disabled via build config 00:01:50.732 test-pmd: explicitly 
disabled via build config 00:01:50.732 test-regex: explicitly disabled via build config 00:01:50.732 test-sad: explicitly disabled via build config 00:01:50.732 test-security-perf: explicitly disabled via build config 00:01:50.732 00:01:50.732 libs: 00:01:50.732 argparse: explicitly disabled via build config 00:01:50.732 metrics: explicitly disabled via build config 00:01:50.732 acl: explicitly disabled via build config 00:01:50.732 bbdev: explicitly disabled via build config 00:01:50.732 bitratestats: explicitly disabled via build config 00:01:50.732 bpf: explicitly disabled via build config 00:01:50.732 cfgfile: explicitly disabled via build config 00:01:50.732 distributor: explicitly disabled via build config 00:01:50.732 efd: explicitly disabled via build config 00:01:50.732 eventdev: explicitly disabled via build config 00:01:50.732 dispatcher: explicitly disabled via build config 00:01:50.732 gpudev: explicitly disabled via build config 00:01:50.732 gro: explicitly disabled via build config 00:01:50.732 gso: explicitly disabled via build config 00:01:50.732 ip_frag: explicitly disabled via build config 00:01:50.732 jobstats: explicitly disabled via build config 00:01:50.732 latencystats: explicitly disabled via build config 00:01:50.732 lpm: explicitly disabled via build config 00:01:50.732 member: explicitly disabled via build config 00:01:50.732 pcapng: explicitly disabled via build config 00:01:50.732 rawdev: explicitly disabled via build config 00:01:50.732 regexdev: explicitly disabled via build config 00:01:50.732 mldev: explicitly disabled via build config 00:01:50.732 rib: explicitly disabled via build config 00:01:50.732 sched: explicitly disabled via build config 00:01:50.732 stack: explicitly disabled via build config 00:01:50.732 ipsec: explicitly disabled via build config 00:01:50.732 pdcp: explicitly disabled via build config 00:01:50.732 fib: explicitly disabled via build config 00:01:50.732 port: explicitly disabled via build config 00:01:50.732 pdump: explicitly disabled via build config 00:01:50.732 table: explicitly disabled via build config 00:01:50.732 pipeline: explicitly disabled via build config 00:01:50.732 graph: explicitly disabled via build config 00:01:50.732 node: explicitly disabled via build config 00:01:50.732 00:01:50.732 drivers: 00:01:50.732 common/cpt: not in enabled drivers build config 00:01:50.732 common/dpaax: not in enabled drivers build config 00:01:50.732 common/iavf: not in enabled drivers build config 00:01:50.732 common/idpf: not in enabled drivers build config 00:01:50.732 common/ionic: not in enabled drivers build config 00:01:50.732 common/mvep: not in enabled drivers build config 00:01:50.732 common/octeontx: not in enabled drivers build config 00:01:50.732 bus/auxiliary: not in enabled drivers build config 00:01:50.732 bus/cdx: not in enabled drivers build config 00:01:50.732 bus/dpaa: not in enabled drivers build config 00:01:50.732 bus/fslmc: not in enabled drivers build config 00:01:50.732 bus/ifpga: not in enabled drivers build config 00:01:50.732 bus/platform: not in enabled drivers build config 00:01:50.732 bus/uacce: not in enabled drivers build config 00:01:50.732 bus/vmbus: not in enabled drivers build config 00:01:50.732 common/cnxk: not in enabled drivers build config 00:01:50.732 common/mlx5: not in enabled drivers build config 00:01:50.732 common/nfp: not in enabled drivers build config 00:01:50.732 common/nitrox: not in enabled drivers build config 00:01:50.732 common/qat: not in enabled drivers build config 
00:01:50.732 common/sfc_efx: not in enabled drivers build config 00:01:50.732 mempool/bucket: not in enabled drivers build config 00:01:50.732 mempool/cnxk: not in enabled drivers build config 00:01:50.732 mempool/dpaa: not in enabled drivers build config 00:01:50.732 mempool/dpaa2: not in enabled drivers build config 00:01:50.732 mempool/octeontx: not in enabled drivers build config 00:01:50.732 mempool/stack: not in enabled drivers build config 00:01:50.732 dma/cnxk: not in enabled drivers build config 00:01:50.732 dma/dpaa: not in enabled drivers build config 00:01:50.732 dma/dpaa2: not in enabled drivers build config 00:01:50.732 dma/hisilicon: not in enabled drivers build config 00:01:50.732 dma/idxd: not in enabled drivers build config 00:01:50.732 dma/ioat: not in enabled drivers build config 00:01:50.732 dma/skeleton: not in enabled drivers build config 00:01:50.732 net/af_packet: not in enabled drivers build config 00:01:50.732 net/af_xdp: not in enabled drivers build config 00:01:50.732 net/ark: not in enabled drivers build config 00:01:50.732 net/atlantic: not in enabled drivers build config 00:01:50.733 net/avp: not in enabled drivers build config 00:01:50.733 net/axgbe: not in enabled drivers build config 00:01:50.733 net/bnx2x: not in enabled drivers build config 00:01:50.733 net/bnxt: not in enabled drivers build config 00:01:50.733 net/bonding: not in enabled drivers build config 00:01:50.733 net/cnxk: not in enabled drivers build config 00:01:50.733 net/cpfl: not in enabled drivers build config 00:01:50.733 net/cxgbe: not in enabled drivers build config 00:01:50.733 net/dpaa: not in enabled drivers build config 00:01:50.733 net/dpaa2: not in enabled drivers build config 00:01:50.733 net/e1000: not in enabled drivers build config 00:01:50.733 net/ena: not in enabled drivers build config 00:01:50.733 net/enetc: not in enabled drivers build config 00:01:50.733 net/enetfec: not in enabled drivers build config 00:01:50.733 net/enic: not in enabled drivers build config 00:01:50.733 net/failsafe: not in enabled drivers build config 00:01:50.733 net/fm10k: not in enabled drivers build config 00:01:50.733 net/gve: not in enabled drivers build config 00:01:50.733 net/hinic: not in enabled drivers build config 00:01:50.733 net/hns3: not in enabled drivers build config 00:01:50.733 net/i40e: not in enabled drivers build config 00:01:50.733 net/iavf: not in enabled drivers build config 00:01:50.733 net/ice: not in enabled drivers build config 00:01:50.733 net/idpf: not in enabled drivers build config 00:01:50.733 net/igc: not in enabled drivers build config 00:01:50.733 net/ionic: not in enabled drivers build config 00:01:50.733 net/ipn3ke: not in enabled drivers build config 00:01:50.733 net/ixgbe: not in enabled drivers build config 00:01:50.733 net/mana: not in enabled drivers build config 00:01:50.733 net/memif: not in enabled drivers build config 00:01:50.733 net/mlx4: not in enabled drivers build config 00:01:50.733 net/mlx5: not in enabled drivers build config 00:01:50.733 net/mvneta: not in enabled drivers build config 00:01:50.733 net/mvpp2: not in enabled drivers build config 00:01:50.733 net/netvsc: not in enabled drivers build config 00:01:50.733 net/nfb: not in enabled drivers build config 00:01:50.733 net/nfp: not in enabled drivers build config 00:01:50.733 net/ngbe: not in enabled drivers build config 00:01:50.733 net/null: not in enabled drivers build config 00:01:50.733 net/octeontx: not in enabled drivers build config 00:01:50.733 net/octeon_ep: not in enabled 
drivers build config 00:01:50.733 net/pcap: not in enabled drivers build config 00:01:50.733 net/pfe: not in enabled drivers build config 00:01:50.733 net/qede: not in enabled drivers build config 00:01:50.733 net/ring: not in enabled drivers build config 00:01:50.733 net/sfc: not in enabled drivers build config 00:01:50.733 net/softnic: not in enabled drivers build config 00:01:50.733 net/tap: not in enabled drivers build config 00:01:50.733 net/thunderx: not in enabled drivers build config 00:01:50.733 net/txgbe: not in enabled drivers build config 00:01:50.733 net/vdev_netvsc: not in enabled drivers build config 00:01:50.733 net/vhost: not in enabled drivers build config 00:01:50.733 net/virtio: not in enabled drivers build config 00:01:50.733 net/vmxnet3: not in enabled drivers build config 00:01:50.733 raw/*: missing internal dependency, "rawdev" 00:01:50.733 crypto/armv8: not in enabled drivers build config 00:01:50.733 crypto/bcmfs: not in enabled drivers build config 00:01:50.733 crypto/caam_jr: not in enabled drivers build config 00:01:50.733 crypto/ccp: not in enabled drivers build config 00:01:50.733 crypto/cnxk: not in enabled drivers build config 00:01:50.733 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.733 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.733 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.733 crypto/mlx5: not in enabled drivers build config 00:01:50.733 crypto/mvsam: not in enabled drivers build config 00:01:50.733 crypto/nitrox: not in enabled drivers build config 00:01:50.733 crypto/null: not in enabled drivers build config 00:01:50.733 crypto/octeontx: not in enabled drivers build config 00:01:50.733 crypto/openssl: not in enabled drivers build config 00:01:50.733 crypto/scheduler: not in enabled drivers build config 00:01:50.733 crypto/uadk: not in enabled drivers build config 00:01:50.733 crypto/virtio: not in enabled drivers build config 00:01:50.733 compress/isal: not in enabled drivers build config 00:01:50.733 compress/mlx5: not in enabled drivers build config 00:01:50.733 compress/nitrox: not in enabled drivers build config 00:01:50.733 compress/octeontx: not in enabled drivers build config 00:01:50.733 compress/zlib: not in enabled drivers build config 00:01:50.733 regex/*: missing internal dependency, "regexdev" 00:01:50.733 ml/*: missing internal dependency, "mldev" 00:01:50.733 vdpa/ifc: not in enabled drivers build config 00:01:50.733 vdpa/mlx5: not in enabled drivers build config 00:01:50.733 vdpa/nfp: not in enabled drivers build config 00:01:50.733 vdpa/sfc: not in enabled drivers build config 00:01:50.733 event/*: missing internal dependency, "eventdev" 00:01:50.733 baseband/*: missing internal dependency, "bbdev" 00:01:50.733 gpu/*: missing internal dependency, "gpudev" 00:01:50.733 00:01:50.733 00:01:50.733 Build targets in project: 85 00:01:50.733 00:01:50.733 DPDK 24.03.0 00:01:50.733 00:01:50.733 User defined options 00:01:50.733 buildtype : debug 00:01:50.733 default_library : shared 00:01:50.733 libdir : lib 00:01:50.733 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.733 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:50.733 c_link_args : 00:01:50.733 cpu_instruction_set: native 00:01:50.733 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:50.733 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:50.733 enable_docs : false 00:01:50.733 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:50.733 enable_kmods : false 00:01:50.733 max_lcores : 128 00:01:50.733 tests : false 00:01:50.733 00:01:50.733 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.673 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:51.673 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.673 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.673 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.673 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.673 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.673 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.673 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.673 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.673 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.673 [10/268] Linking static target lib/librte_kvargs.a 00:01:51.673 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.673 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.673 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.673 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.673 [15/268] Linking static target lib/librte_log.a 00:01:51.673 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.626 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.626 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.626 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.626 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.626 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.626 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.626 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.626 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.626 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.626 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.626 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.626 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.626 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.626 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 
00:01:52.626 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.626 [32/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.626 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.626 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.626 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.626 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.626 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.626 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.626 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.626 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.626 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.626 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.626 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.626 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.626 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.626 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.626 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.626 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.626 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.626 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.626 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.626 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.626 [53/268] Linking static target lib/librte_telemetry.a 00:01:52.626 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.626 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.626 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.626 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.626 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.888 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.888 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.888 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.888 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.888 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.888 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.888 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.888 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.148 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.148 [68/268] Linking target lib/librte_log.so.24.1 00:01:53.148 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:53.148 [70/268] Linking static target lib/librte_pci.a 00:01:53.149 [71/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.411 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:53.411 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.411 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.411 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.411 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.411 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.411 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.411 [79/268] Linking target lib/librte_kvargs.so.24.1 00:01:53.411 [80/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.411 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.411 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.411 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.411 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.411 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.671 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.671 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.671 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.671 [89/268] Linking static target lib/librte_ring.a 00:01:53.671 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.671 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.671 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.671 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.671 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.671 [95/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.671 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.671 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.671 [98/268] Linking static target lib/librte_meter.a 00:01:53.671 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.671 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.671 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.671 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.671 [103/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.671 [104/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.671 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:53.671 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.671 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.671 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.671 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.671 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.671 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 
00:01:53.671 [112/268] Linking target lib/librte_telemetry.so.24.1 00:01:53.671 [113/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:53.671 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.671 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.671 [116/268] Linking static target lib/librte_rcu.a 00:01:53.671 [117/268] Linking static target lib/librte_mempool.a 00:01:53.671 [118/268] Linking static target lib/librte_eal.a 00:01:53.944 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.944 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:53.944 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:53.944 [122/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:53.944 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.944 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.944 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.944 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.944 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.944 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.944 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.944 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.944 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:53.944 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:54.205 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:54.205 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:54.205 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.205 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:54.205 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.205 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:54.205 [139/268] Linking static target lib/librte_net.a 00:01:54.205 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:54.205 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:54.468 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:54.468 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:54.468 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:54.468 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:54.468 [146/268] Linking static target lib/librte_cmdline.a 00:01:54.468 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:54.468 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:54.468 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.468 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.468 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:54.729 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.729 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:54.729 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.729 [155/268] Linking static target lib/librte_timer.a 00:01:54.729 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:54.729 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:54.729 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:54.729 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:54.729 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.729 [161/268] Linking static target lib/librte_dmadev.a 00:01:54.729 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.988 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:54.988 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:54.988 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:54.988 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:54.988 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.988 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:54.988 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:54.988 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.988 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:54.988 [172/268] Linking static target lib/librte_power.a 00:01:54.988 [173/268] Linking static target lib/librte_compressdev.a 00:01:54.988 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:54.988 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.988 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.247 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.247 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.247 [179/268] Linking static target lib/librte_hash.a 00:01:55.247 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.247 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.247 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.247 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.247 [184/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.247 [185/268] Linking static target lib/librte_mbuf.a 00:01:55.247 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.247 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.247 [188/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.247 [189/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.506 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.506 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.506 [192/268] Generating lib/cmdline.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:55.506 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.506 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.506 [195/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.506 [196/268] Linking static target lib/librte_reorder.a 00:01:55.506 [197/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.506 [198/268] Linking static target lib/librte_security.a 00:01:55.506 [199/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.506 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.506 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.506 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.506 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:55.506 [204/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.506 [205/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.506 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.764 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.765 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.765 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.765 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.765 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:55.765 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.765 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.765 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.765 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.765 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.765 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.765 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.765 [219/268] Linking static target drivers/librte_mempool_ring.a 00:01:55.765 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.023 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.023 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.023 [223/268] Linking static target lib/librte_ethdev.a 00:01:56.283 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.283 [225/268] Linking static target lib/librte_cryptodev.a 00:01:56.283 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.665 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.572 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.484 [229/268] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:01.484 [230/268] Linking target lib/librte_eal.so.24.1 00:02:01.484 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.484 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:01.484 [233/268] Linking target lib/librte_pci.so.24.1 00:02:01.484 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:01.484 [235/268] Linking target lib/librte_meter.so.24.1 00:02:01.484 [236/268] Linking target lib/librte_ring.so.24.1 00:02:01.484 [237/268] Linking target lib/librte_timer.so.24.1 00:02:01.484 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.743 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:01.743 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:01.743 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:01.743 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:01.743 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:01.743 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:02.003 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:02.003 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:02.003 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:02.263 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:02.263 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:02.263 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:02.522 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:02.522 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:02.522 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:02.522 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:02.522 [255/268] Linking target lib/librte_net.so.24.1 00:02:02.522 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:02.783 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:02.783 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:02.783 [259/268] Linking target lib/librte_security.so.24.1 00:02:02.783 [260/268] Linking target lib/librte_hash.so.24.1 00:02:03.042 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:03.042 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:03.302 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:03.302 [264/268] Linking target lib/librte_power.so.24.1 00:02:13.301 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.301 [266/268] Linking static target lib/librte_vhost.a 00:02:14.239 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.239 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:14.239 INFO: autodetecting backend as ninja 00:02:14.239 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:53.001 CC lib/ut/ut.o 00:02:53.001 CC lib/ut_mock/mock.o 00:02:53.001 CC lib/log/log.o 00:02:53.001 CC lib/log/log_deprecated.o 00:02:53.001 CC lib/log/log_flags.o 00:02:53.001 LIB libspdk_ut.a 
00:02:53.001 LIB libspdk_ut_mock.a 00:02:53.001 SO libspdk_ut_mock.so.6.0 00:02:53.001 SO libspdk_ut.so.2.0 00:02:53.001 SYMLINK libspdk_ut_mock.so 00:02:53.001 SYMLINK libspdk_ut.so 00:02:53.001 LIB libspdk_log.a 00:02:53.001 SO libspdk_log.so.7.1 00:02:53.001 SYMLINK libspdk_log.so 00:02:53.001 CXX lib/trace_parser/trace.o 00:02:53.001 CC lib/ioat/ioat.o 00:02:53.001 CC lib/dma/dma.o 00:02:53.001 CC lib/util/base64.o 00:02:53.001 CC lib/util/bit_array.o 00:02:53.001 CC lib/util/crc16.o 00:02:53.001 CC lib/util/cpuset.o 00:02:53.001 CC lib/util/crc32.o 00:02:53.001 CC lib/util/crc32c.o 00:02:53.001 CC lib/util/crc32_ieee.o 00:02:53.001 CC lib/util/crc64.o 00:02:53.001 CC lib/util/dif.o 00:02:53.001 CC lib/util/fd.o 00:02:53.001 CC lib/util/fd_group.o 00:02:53.001 CC lib/util/file.o 00:02:53.001 CC lib/util/hexlify.o 00:02:53.001 CC lib/util/iov.o 00:02:53.001 CC lib/util/math.o 00:02:53.001 CC lib/util/net.o 00:02:53.001 CC lib/util/pipe.o 00:02:53.001 CC lib/util/string.o 00:02:53.001 CC lib/util/strerror_tls.o 00:02:53.001 CC lib/util/uuid.o 00:02:53.001 CC lib/util/zipf.o 00:02:53.001 CC lib/util/xor.o 00:02:53.001 CC lib/util/md5.o 00:02:53.001 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.001 CC lib/vfio_user/host/vfio_user.o 00:02:53.001 LIB libspdk_dma.a 00:02:53.001 SO libspdk_dma.so.5.0 00:02:53.001 LIB libspdk_ioat.a 00:02:53.001 SO libspdk_ioat.so.7.0 00:02:53.001 SYMLINK libspdk_dma.so 00:02:53.001 SYMLINK libspdk_ioat.so 00:02:53.001 LIB libspdk_vfio_user.a 00:02:53.001 SO libspdk_vfio_user.so.5.0 00:02:53.001 SYMLINK libspdk_vfio_user.so 00:02:53.001 LIB libspdk_util.a 00:02:53.001 SO libspdk_util.so.10.0 00:02:53.001 SYMLINK libspdk_util.so 00:02:53.001 LIB libspdk_trace_parser.a 00:02:53.001 SO libspdk_trace_parser.so.6.0 00:02:53.001 CC lib/idxd/idxd.o 00:02:53.001 CC lib/idxd/idxd_user.o 00:02:53.001 CC lib/rdma_utils/rdma_utils.o 00:02:53.001 CC lib/idxd/idxd_kernel.o 00:02:53.001 CC lib/vmd/vmd.o 00:02:53.001 CC lib/vmd/led.o 00:02:53.001 CC lib/env_dpdk/env.o 00:02:53.001 CC lib/env_dpdk/memory.o 00:02:53.001 CC lib/json/json_parse.o 00:02:53.001 CC lib/env_dpdk/pci.o 00:02:53.001 CC lib/env_dpdk/init.o 00:02:53.001 CC lib/json/json_util.o 00:02:53.001 CC lib/env_dpdk/threads.o 00:02:53.001 CC lib/env_dpdk/pci_ioat.o 00:02:53.001 CC lib/json/json_write.o 00:02:53.001 CC lib/env_dpdk/pci_virtio.o 00:02:53.001 CC lib/env_dpdk/pci_vmd.o 00:02:53.001 CC lib/env_dpdk/pci_idxd.o 00:02:53.001 CC lib/conf/conf.o 00:02:53.001 CC lib/env_dpdk/pci_event.o 00:02:53.001 CC lib/env_dpdk/sigbus_handler.o 00:02:53.001 CC lib/env_dpdk/pci_dpdk.o 00:02:53.001 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:53.001 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:53.001 CC lib/rdma_provider/common.o 00:02:53.001 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:53.001 SYMLINK libspdk_trace_parser.so 00:02:53.001 LIB libspdk_conf.a 00:02:53.001 SO libspdk_conf.so.6.0 00:02:53.001 SYMLINK libspdk_conf.so 00:02:53.001 LIB libspdk_rdma_utils.a 00:02:53.001 LIB libspdk_rdma_provider.a 00:02:53.001 SO libspdk_rdma_utils.so.1.0 00:02:53.001 SO libspdk_rdma_provider.so.6.0 00:02:53.001 LIB libspdk_json.a 00:02:53.001 SO libspdk_json.so.6.0 00:02:53.001 SYMLINK libspdk_rdma_utils.so 00:02:53.001 SYMLINK libspdk_rdma_provider.so 00:02:53.001 SYMLINK libspdk_json.so 00:02:53.001 CC lib/jsonrpc/jsonrpc_server.o 00:02:53.001 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:53.001 CC lib/jsonrpc/jsonrpc_client.o 00:02:53.001 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:53.001 LIB libspdk_idxd.a 00:02:53.001 SO libspdk_idxd.so.12.1 
00:02:53.001 LIB libspdk_vmd.a 00:02:53.001 SYMLINK libspdk_idxd.so 00:02:53.001 SO libspdk_vmd.so.6.0 00:02:53.001 SYMLINK libspdk_vmd.so 00:02:53.001 LIB libspdk_jsonrpc.a 00:02:53.001 SO libspdk_jsonrpc.so.6.0 00:02:53.001 SYMLINK libspdk_jsonrpc.so 00:02:53.001 CC lib/rpc/rpc.o 00:02:53.001 LIB libspdk_rpc.a 00:02:53.001 SO libspdk_rpc.so.6.0 00:02:53.259 SYMLINK libspdk_rpc.so 00:02:53.259 CC lib/trace/trace.o 00:02:53.259 CC lib/trace/trace_flags.o 00:02:53.259 CC lib/trace/trace_rpc.o 00:02:53.259 CC lib/notify/notify.o 00:02:53.259 CC lib/notify/notify_rpc.o 00:02:53.259 CC lib/keyring/keyring.o 00:02:53.259 CC lib/keyring/keyring_rpc.o 00:02:53.518 LIB libspdk_notify.a 00:02:53.518 SO libspdk_notify.so.6.0 00:02:53.518 SYMLINK libspdk_notify.so 00:02:53.518 LIB libspdk_keyring.a 00:02:53.822 SO libspdk_keyring.so.2.0 00:02:53.822 LIB libspdk_trace.a 00:02:53.822 SO libspdk_trace.so.11.0 00:02:53.822 SYMLINK libspdk_keyring.so 00:02:53.822 SYMLINK libspdk_trace.so 00:02:54.106 CC lib/thread/thread.o 00:02:54.106 CC lib/thread/iobuf.o 00:02:54.106 CC lib/sock/sock.o 00:02:54.106 CC lib/sock/sock_rpc.o 00:02:54.106 LIB libspdk_env_dpdk.a 00:02:54.366 SO libspdk_env_dpdk.so.15.1 00:02:54.627 LIB libspdk_sock.a 00:02:54.627 SYMLINK libspdk_env_dpdk.so 00:02:54.627 SO libspdk_sock.so.10.0 00:02:54.627 SYMLINK libspdk_sock.so 00:02:54.886 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.886 CC lib/nvme/nvme_ctrlr.o 00:02:54.886 CC lib/nvme/nvme_fabric.o 00:02:54.886 CC lib/nvme/nvme_ns_cmd.o 00:02:54.886 CC lib/nvme/nvme_ns.o 00:02:54.886 CC lib/nvme/nvme_pcie_common.o 00:02:54.886 CC lib/nvme/nvme_pcie.o 00:02:54.886 CC lib/nvme/nvme_qpair.o 00:02:54.886 CC lib/nvme/nvme.o 00:02:54.886 CC lib/nvme/nvme_quirks.o 00:02:54.886 CC lib/nvme/nvme_transport.o 00:02:54.886 CC lib/nvme/nvme_discovery.o 00:02:54.886 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.886 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.886 CC lib/nvme/nvme_tcp.o 00:02:54.886 CC lib/nvme/nvme_opal.o 00:02:54.886 CC lib/nvme/nvme_io_msg.o 00:02:54.886 CC lib/nvme/nvme_poll_group.o 00:02:54.886 CC lib/nvme/nvme_zns.o 00:02:54.886 CC lib/nvme/nvme_stubs.o 00:02:54.886 CC lib/nvme/nvme_auth.o 00:02:54.886 CC lib/nvme/nvme_cuse.o 00:02:54.886 CC lib/nvme/nvme_vfio_user.o 00:02:54.886 CC lib/nvme/nvme_rdma.o 00:02:55.824 LIB libspdk_thread.a 00:02:55.824 SO libspdk_thread.so.11.0 00:02:56.083 SYMLINK libspdk_thread.so 00:02:56.083 CC lib/vfu_tgt/tgt_endpoint.o 00:02:56.083 CC lib/virtio/virtio.o 00:02:56.083 CC lib/virtio/virtio_vhost_user.o 00:02:56.083 CC lib/vfu_tgt/tgt_rpc.o 00:02:56.083 CC lib/virtio/virtio_vfio_user.o 00:02:56.083 CC lib/blob/blobstore.o 00:02:56.083 CC lib/virtio/virtio_pci.o 00:02:56.083 CC lib/blob/request.o 00:02:56.083 CC lib/blob/zeroes.o 00:02:56.083 CC lib/accel/accel.o 00:02:56.083 CC lib/blob/blob_bs_dev.o 00:02:56.083 CC lib/fsdev/fsdev.o 00:02:56.083 CC lib/fsdev/fsdev_io.o 00:02:56.083 CC lib/accel/accel_rpc.o 00:02:56.083 CC lib/init/json_config.o 00:02:56.083 CC lib/accel/accel_sw.o 00:02:56.083 CC lib/init/subsystem.o 00:02:56.083 CC lib/fsdev/fsdev_rpc.o 00:02:56.083 CC lib/init/subsystem_rpc.o 00:02:56.083 CC lib/init/rpc.o 00:02:56.651 LIB libspdk_init.a 00:02:56.651 SO libspdk_init.so.6.0 00:02:56.651 LIB libspdk_vfu_tgt.a 00:02:56.651 SO libspdk_vfu_tgt.so.3.0 00:02:56.651 SYMLINK libspdk_init.so 00:02:56.651 SYMLINK libspdk_vfu_tgt.so 00:02:56.651 LIB libspdk_virtio.a 00:02:56.651 SO libspdk_virtio.so.7.0 00:02:56.651 SYMLINK libspdk_virtio.so 00:02:56.651 CC lib/event/app.o 00:02:56.651 CC 
lib/event/reactor.o 00:02:56.651 CC lib/event/app_rpc.o 00:02:56.651 CC lib/event/log_rpc.o 00:02:56.651 CC lib/event/scheduler_static.o 00:02:56.910 LIB libspdk_fsdev.a 00:02:56.910 SO libspdk_fsdev.so.2.0 00:02:56.910 SYMLINK libspdk_fsdev.so 00:02:57.169 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:57.169 LIB libspdk_event.a 00:02:57.169 SO libspdk_event.so.14.0 00:02:57.429 SYMLINK libspdk_event.so 00:02:57.429 LIB libspdk_nvme.a 00:02:57.690 SO libspdk_nvme.so.14.1 00:02:57.950 SYMLINK libspdk_nvme.so 00:02:57.950 LIB libspdk_fuse_dispatcher.a 00:02:57.950 SO libspdk_fuse_dispatcher.so.1.0 00:02:57.950 SYMLINK libspdk_fuse_dispatcher.so 00:02:57.950 LIB libspdk_accel.a 00:02:57.950 SO libspdk_accel.so.16.0 00:02:58.210 SYMLINK libspdk_accel.so 00:02:58.469 CC lib/bdev/bdev.o 00:02:58.469 CC lib/bdev/bdev_zone.o 00:02:58.469 CC lib/bdev/bdev_rpc.o 00:02:58.469 CC lib/bdev/scsi_nvme.o 00:02:58.469 CC lib/bdev/part.o 00:03:02.677 LIB libspdk_bdev.a 00:03:02.677 SO libspdk_bdev.so.17.0 00:03:02.677 LIB libspdk_blob.a 00:03:02.677 SYMLINK libspdk_bdev.so 00:03:02.677 SO libspdk_blob.so.11.0 00:03:02.677 SYMLINK libspdk_blob.so 00:03:02.677 CC lib/ublk/ublk.o 00:03:02.677 CC lib/ublk/ublk_rpc.o 00:03:02.677 CC lib/nvmf/ctrlr.o 00:03:02.677 CC lib/nvmf/ctrlr_discovery.o 00:03:02.677 CC lib/nvmf/ctrlr_bdev.o 00:03:02.677 CC lib/nvmf/subsystem.o 00:03:02.677 CC lib/nvmf/nvmf.o 00:03:02.677 CC lib/nvmf/nvmf_rpc.o 00:03:02.677 CC lib/nvmf/transport.o 00:03:02.677 CC lib/nvmf/tcp.o 00:03:02.677 CC lib/nvmf/stubs.o 00:03:02.677 CC lib/nvmf/mdns_server.o 00:03:02.677 CC lib/ftl/ftl_core.o 00:03:02.677 CC lib/scsi/dev.o 00:03:02.677 CC lib/nbd/nbd.o 00:03:02.677 CC lib/ftl/ftl_init.o 00:03:02.677 CC lib/scsi/lun.o 00:03:02.677 CC lib/nbd/nbd_rpc.o 00:03:02.677 CC lib/scsi/port.o 00:03:02.677 CC lib/nvmf/vfio_user.o 00:03:02.677 CC lib/nvmf/rdma.o 00:03:02.677 CC lib/scsi/scsi.o 00:03:02.677 CC lib/ftl/ftl_debug.o 00:03:02.677 CC lib/ftl/ftl_layout.o 00:03:02.677 CC lib/nvmf/auth.o 00:03:02.677 CC lib/scsi/scsi_bdev.o 00:03:02.677 CC lib/ftl/ftl_io.o 00:03:02.677 CC lib/ftl/ftl_sb.o 00:03:02.677 CC lib/scsi/scsi_pr.o 00:03:02.678 CC lib/scsi/scsi_rpc.o 00:03:02.678 CC lib/ftl/ftl_l2p.o 00:03:02.678 CC lib/ftl/ftl_l2p_flat.o 00:03:02.678 CC lib/ftl/ftl_nv_cache.o 00:03:02.678 CC lib/scsi/task.o 00:03:02.678 CC lib/ftl/ftl_band.o 00:03:02.678 CC lib/ftl/ftl_band_ops.o 00:03:02.678 CC lib/ftl/ftl_writer.o 00:03:02.678 CC lib/ftl/ftl_rq.o 00:03:02.678 CC lib/ftl/ftl_reloc.o 00:03:02.678 CC lib/ftl/ftl_l2p_cache.o 00:03:02.678 CC lib/ftl/ftl_p2l.o 00:03:02.678 CC lib/ftl/ftl_p2l_log.o 00:03:02.678 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.678 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.678 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.678 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.938 CC lib/blobfs/blobfs.o 00:03:02.938 CC lib/lvol/lvol.o 00:03:02.938 CC lib/blobfs/tree.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.201 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.202 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.202 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.202 CC lib/ftl/utils/ftl_conf.o 00:03:03.202 CC lib/ftl/utils/ftl_md.o 00:03:03.202 CC lib/ftl/utils/ftl_mempool.o 00:03:03.202 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.202 CC lib/ftl/utils/ftl_property.o 00:03:03.202 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.202 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.202 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.202 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.202 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.202 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.202 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.463 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.463 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.463 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.463 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.463 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:03.463 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:03.463 CC lib/ftl/base/ftl_base_dev.o 00:03:03.463 LIB libspdk_nbd.a 00:03:03.463 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.463 CC lib/ftl/ftl_trace.o 00:03:03.463 SO libspdk_nbd.so.7.0 00:03:03.722 SYMLINK libspdk_nbd.so 00:03:03.722 LIB libspdk_scsi.a 00:03:03.722 SO libspdk_scsi.so.9.0 00:03:03.722 LIB libspdk_ublk.a 00:03:03.722 SYMLINK libspdk_scsi.so 00:03:03.722 SO libspdk_ublk.so.3.0 00:03:03.722 SYMLINK libspdk_ublk.so 00:03:03.981 CC lib/iscsi/conn.o 00:03:03.981 CC lib/iscsi/init_grp.o 00:03:03.981 CC lib/iscsi/iscsi.o 00:03:03.981 CC lib/iscsi/param.o 00:03:03.981 CC lib/iscsi/portal_grp.o 00:03:03.981 CC lib/iscsi/tgt_node.o 00:03:03.981 CC lib/vhost/vhost.o 00:03:03.981 CC lib/iscsi/iscsi_subsystem.o 00:03:03.981 CC lib/vhost/vhost_rpc.o 00:03:03.981 CC lib/iscsi/iscsi_rpc.o 00:03:03.981 CC lib/iscsi/task.o 00:03:03.981 CC lib/vhost/vhost_scsi.o 00:03:03.981 CC lib/vhost/vhost_blk.o 00:03:03.981 CC lib/vhost/rte_vhost_user.o 00:03:04.239 LIB libspdk_lvol.a 00:03:04.239 SO libspdk_lvol.so.10.0 00:03:04.240 LIB libspdk_blobfs.a 00:03:04.240 SO libspdk_blobfs.so.10.0 00:03:04.240 LIB libspdk_ftl.a 00:03:04.240 SYMLINK libspdk_lvol.so 00:03:04.240 SYMLINK libspdk_blobfs.so 00:03:04.499 SO libspdk_ftl.so.9.0 00:03:05.069 SYMLINK libspdk_ftl.so 00:03:05.330 LIB libspdk_vhost.a 00:03:05.330 SO libspdk_vhost.so.8.0 00:03:05.330 LIB libspdk_iscsi.a 00:03:05.330 SYMLINK libspdk_vhost.so 00:03:05.590 SO libspdk_iscsi.so.8.0 00:03:05.590 SYMLINK libspdk_iscsi.so 00:03:06.968 LIB libspdk_nvmf.a 00:03:06.968 SO libspdk_nvmf.so.20.0 00:03:06.968 SYMLINK libspdk_nvmf.so 00:03:07.534 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.534 CC module/vfu_device/vfu_virtio.o 00:03:07.534 CC module/vfu_device/vfu_virtio_blk.o 00:03:07.534 CC module/vfu_device/vfu_virtio_scsi.o 00:03:07.534 CC module/vfu_device/vfu_virtio_rpc.o 00:03:07.534 CC module/vfu_device/vfu_virtio_fs.o 00:03:07.534 CC module/accel/iaa/accel_iaa.o 00:03:07.534 CC module/accel/ioat/accel_ioat.o 00:03:07.534 CC module/accel/iaa/accel_iaa_rpc.o 00:03:07.534 CC module/keyring/file/keyring.o 00:03:07.534 CC module/accel/error/accel_error.o 00:03:07.534 CC module/accel/ioat/accel_ioat_rpc.o 00:03:07.534 CC module/accel/error/accel_error_rpc.o 00:03:07.534 CC module/keyring/file/keyring_rpc.o 00:03:07.534 CC module/accel/dsa/accel_dsa.o 00:03:07.534 CC module/blob/bdev/blob_bdev.o 00:03:07.534 CC module/accel/dsa/accel_dsa_rpc.o 00:03:07.534 CC module/fsdev/aio/fsdev_aio.o 00:03:07.534 CC module/fsdev/aio/linux_aio_mgr.o 00:03:07.534 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:07.534 CC module/keyring/linux/keyring.o 00:03:07.534 CC module/keyring/linux/keyring_rpc.o 00:03:07.534 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.534 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.534 CC module/sock/posix/posix.o 00:03:07.534 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:07.534 LIB libspdk_env_dpdk_rpc.a 
00:03:07.534 SO libspdk_env_dpdk_rpc.so.6.0 00:03:07.534 SYMLINK libspdk_env_dpdk_rpc.so 00:03:07.792 LIB libspdk_keyring_linux.a 00:03:07.792 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.792 SO libspdk_keyring_linux.so.1.0 00:03:07.792 LIB libspdk_accel_error.a 00:03:07.792 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.792 LIB libspdk_accel_ioat.a 00:03:07.792 LIB libspdk_accel_iaa.a 00:03:07.792 SO libspdk_accel_error.so.2.0 00:03:07.792 LIB libspdk_keyring_file.a 00:03:07.792 SYMLINK libspdk_keyring_linux.so 00:03:07.792 SO libspdk_accel_iaa.so.3.0 00:03:07.792 SO libspdk_accel_ioat.so.6.0 00:03:07.792 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.792 LIB libspdk_scheduler_gscheduler.a 00:03:07.792 SO libspdk_keyring_file.so.2.0 00:03:07.792 SYMLINK libspdk_accel_error.so 00:03:07.792 SO libspdk_scheduler_gscheduler.so.4.0 00:03:07.792 SYMLINK libspdk_accel_ioat.so 00:03:07.792 SYMLINK libspdk_accel_iaa.so 00:03:07.792 LIB libspdk_blob_bdev.a 00:03:07.792 LIB libspdk_scheduler_dynamic.a 00:03:07.792 SYMLINK libspdk_keyring_file.so 00:03:07.792 SYMLINK libspdk_scheduler_gscheduler.so 00:03:07.792 SO libspdk_blob_bdev.so.11.0 00:03:07.792 LIB libspdk_accel_dsa.a 00:03:07.792 SO libspdk_scheduler_dynamic.so.4.0 00:03:07.792 SO libspdk_accel_dsa.so.5.0 00:03:07.792 SYMLINK libspdk_blob_bdev.so 00:03:08.050 SYMLINK libspdk_accel_dsa.so 00:03:08.050 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.310 CC module/bdev/delay/vbdev_delay.o 00:03:08.310 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.310 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.310 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.310 CC module/bdev/error/vbdev_error.o 00:03:08.310 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.310 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:08.310 CC module/bdev/gpt/gpt.o 00:03:08.310 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.310 CC module/bdev/split/vbdev_split.o 00:03:08.310 CC module/bdev/malloc/bdev_malloc.o 00:03:08.310 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.310 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.310 CC module/bdev/nvme/bdev_nvme.o 00:03:08.310 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:08.310 CC module/bdev/raid/bdev_raid.o 00:03:08.310 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.310 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.310 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:08.310 CC module/bdev/nvme/nvme_rpc.o 00:03:08.310 CC module/bdev/raid/raid0.o 00:03:08.310 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.310 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.310 CC module/bdev/raid/raid1.o 00:03:08.310 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.310 CC module/bdev/aio/bdev_aio.o 00:03:08.310 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.310 CC module/bdev/nvme/vbdev_opal.o 00:03:08.310 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.310 CC module/bdev/raid/concat.o 00:03:08.310 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.310 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.310 CC module/bdev/ftl/bdev_ftl.o 00:03:08.310 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.310 CC module/bdev/null/bdev_null.o 00:03:08.310 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.310 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.310 CC module/bdev/null/bdev_null_rpc.o 00:03:08.310 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.310 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.310 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.310 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.310 LIB libspdk_fsdev_aio.a 00:03:08.310 SO libspdk_fsdev_aio.so.1.0 
00:03:08.310 SYMLINK libspdk_fsdev_aio.so 00:03:08.310 LIB libspdk_vfu_device.a 00:03:08.310 LIB libspdk_sock_posix.a 00:03:08.310 SO libspdk_vfu_device.so.3.0 00:03:08.568 SO libspdk_sock_posix.so.6.0 00:03:08.568 SYMLINK libspdk_vfu_device.so 00:03:08.568 SYMLINK libspdk_sock_posix.so 00:03:08.568 LIB libspdk_blobfs_bdev.a 00:03:08.568 SO libspdk_blobfs_bdev.so.6.0 00:03:08.568 LIB libspdk_bdev_gpt.a 00:03:08.568 LIB libspdk_bdev_split.a 00:03:08.827 SYMLINK libspdk_blobfs_bdev.so 00:03:08.827 SO libspdk_bdev_gpt.so.6.0 00:03:08.827 SO libspdk_bdev_split.so.6.0 00:03:08.827 LIB libspdk_bdev_ftl.a 00:03:08.827 LIB libspdk_bdev_aio.a 00:03:08.827 LIB libspdk_bdev_null.a 00:03:08.827 LIB libspdk_bdev_error.a 00:03:08.827 SO libspdk_bdev_aio.so.6.0 00:03:08.827 SO libspdk_bdev_ftl.so.6.0 00:03:08.827 LIB libspdk_bdev_passthru.a 00:03:08.827 SO libspdk_bdev_null.so.6.0 00:03:08.827 SO libspdk_bdev_error.so.6.0 00:03:08.827 SYMLINK libspdk_bdev_gpt.so 00:03:08.827 SO libspdk_bdev_passthru.so.6.0 00:03:08.827 SYMLINK libspdk_bdev_split.so 00:03:08.827 LIB libspdk_bdev_zone_block.a 00:03:08.827 SYMLINK libspdk_bdev_aio.so 00:03:08.827 SYMLINK libspdk_bdev_ftl.so 00:03:08.827 SYMLINK libspdk_bdev_error.so 00:03:08.827 SYMLINK libspdk_bdev_null.so 00:03:08.827 SO libspdk_bdev_zone_block.so.6.0 00:03:08.827 SYMLINK libspdk_bdev_passthru.so 00:03:08.827 LIB libspdk_bdev_malloc.a 00:03:08.827 SO libspdk_bdev_malloc.so.6.0 00:03:08.827 LIB libspdk_bdev_delay.a 00:03:08.827 SYMLINK libspdk_bdev_zone_block.so 00:03:08.827 SO libspdk_bdev_delay.so.6.0 00:03:08.827 SYMLINK libspdk_bdev_malloc.so 00:03:09.087 SYMLINK libspdk_bdev_delay.so 00:03:09.087 LIB libspdk_bdev_iscsi.a 00:03:09.087 SO libspdk_bdev_iscsi.so.6.0 00:03:09.087 LIB libspdk_bdev_lvol.a 00:03:09.087 SO libspdk_bdev_lvol.so.6.0 00:03:09.087 SYMLINK libspdk_bdev_iscsi.so 00:03:09.087 SYMLINK libspdk_bdev_lvol.so 00:03:09.087 LIB libspdk_bdev_virtio.a 00:03:09.348 SO libspdk_bdev_virtio.so.6.0 00:03:09.348 SYMLINK libspdk_bdev_virtio.so 00:03:09.918 LIB libspdk_bdev_raid.a 00:03:09.918 SO libspdk_bdev_raid.so.6.0 00:03:10.179 SYMLINK libspdk_bdev_raid.so 00:03:14.380 LIB libspdk_bdev_nvme.a 00:03:14.380 SO libspdk_bdev_nvme.so.7.1 00:03:14.380 SYMLINK libspdk_bdev_nvme.so 00:03:14.638 CC module/event/subsystems/sock/sock.o 00:03:14.638 CC module/event/subsystems/keyring/keyring.o 00:03:14.638 CC module/event/subsystems/vmd/vmd.o 00:03:14.638 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:14.638 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:14.638 CC module/event/subsystems/fsdev/fsdev.o 00:03:14.638 CC module/event/subsystems/iobuf/iobuf.o 00:03:14.638 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:14.638 CC module/event/subsystems/scheduler/scheduler.o 00:03:14.638 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:14.897 LIB libspdk_event_keyring.a 00:03:14.897 LIB libspdk_event_sock.a 00:03:14.897 LIB libspdk_event_vhost_blk.a 00:03:14.897 LIB libspdk_event_fsdev.a 00:03:14.897 SO libspdk_event_keyring.so.1.0 00:03:14.897 LIB libspdk_event_vmd.a 00:03:14.897 SO libspdk_event_sock.so.5.0 00:03:14.897 SO libspdk_event_vhost_blk.so.3.0 00:03:14.897 LIB libspdk_event_scheduler.a 00:03:14.897 LIB libspdk_event_vfu_tgt.a 00:03:14.897 SO libspdk_event_fsdev.so.1.0 00:03:14.897 SO libspdk_event_vmd.so.6.0 00:03:14.897 LIB libspdk_event_iobuf.a 00:03:14.897 SO libspdk_event_scheduler.so.4.0 00:03:14.897 SO libspdk_event_vfu_tgt.so.3.0 00:03:14.897 SYMLINK libspdk_event_keyring.so 00:03:14.897 SYMLINK libspdk_event_vhost_blk.so 
00:03:14.897 SO libspdk_event_iobuf.so.3.0 00:03:14.897 SYMLINK libspdk_event_sock.so 00:03:14.897 SYMLINK libspdk_event_vmd.so 00:03:14.897 SYMLINK libspdk_event_fsdev.so 00:03:14.897 SYMLINK libspdk_event_scheduler.so 00:03:14.897 SYMLINK libspdk_event_vfu_tgt.so 00:03:15.157 SYMLINK libspdk_event_iobuf.so 00:03:15.418 CC module/event/subsystems/accel/accel.o 00:03:15.678 LIB libspdk_event_accel.a 00:03:15.678 SO libspdk_event_accel.so.6.0 00:03:15.939 SYMLINK libspdk_event_accel.so 00:03:16.198 CC module/event/subsystems/bdev/bdev.o 00:03:16.459 LIB libspdk_event_bdev.a 00:03:16.459 SO libspdk_event_bdev.so.6.0 00:03:16.719 SYMLINK libspdk_event_bdev.so 00:03:16.977 CC module/event/subsystems/ublk/ublk.o 00:03:16.977 CC module/event/subsystems/scsi/scsi.o 00:03:16.977 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.978 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.978 CC module/event/subsystems/nbd/nbd.o 00:03:16.978 LIB libspdk_event_ublk.a 00:03:16.978 SO libspdk_event_ublk.so.3.0 00:03:16.978 LIB libspdk_event_scsi.a 00:03:16.978 SO libspdk_event_scsi.so.6.0 00:03:16.978 LIB libspdk_event_nbd.a 00:03:17.237 SYMLINK libspdk_event_ublk.so 00:03:17.237 SO libspdk_event_nbd.so.6.0 00:03:17.237 SYMLINK libspdk_event_scsi.so 00:03:17.237 SYMLINK libspdk_event_nbd.so 00:03:17.237 LIB libspdk_event_nvmf.a 00:03:17.237 SO libspdk_event_nvmf.so.6.0 00:03:17.497 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.497 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.497 SYMLINK libspdk_event_nvmf.so 00:03:17.756 LIB libspdk_event_iscsi.a 00:03:17.756 SO libspdk_event_iscsi.so.6.0 00:03:17.756 LIB libspdk_event_vhost_scsi.a 00:03:17.756 SO libspdk_event_vhost_scsi.so.3.0 00:03:17.756 SYMLINK libspdk_event_iscsi.so 00:03:17.756 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.014 SO libspdk.so.6.0 00:03:18.014 SYMLINK libspdk.so 00:03:18.277 CC app/trace_record/trace_record.o 00:03:18.277 CC app/spdk_top/spdk_top.o 00:03:18.277 CC app/spdk_nvme_identify/identify.o 00:03:18.277 CC test/rpc_client/rpc_client_test.o 00:03:18.277 CC app/spdk_nvme_perf/perf.o 00:03:18.277 CXX app/trace/trace.o 00:03:18.277 TEST_HEADER include/spdk/accel_module.h 00:03:18.277 TEST_HEADER include/spdk/accel.h 00:03:18.277 TEST_HEADER include/spdk/assert.h 00:03:18.277 TEST_HEADER include/spdk/barrier.h 00:03:18.277 CC app/spdk_lspci/spdk_lspci.o 00:03:18.277 TEST_HEADER include/spdk/base64.h 00:03:18.277 TEST_HEADER include/spdk/bdev.h 00:03:18.277 TEST_HEADER include/spdk/bdev_module.h 00:03:18.277 TEST_HEADER include/spdk/bdev_zone.h 00:03:18.277 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.277 TEST_HEADER include/spdk/bit_array.h 00:03:18.277 TEST_HEADER include/spdk/bit_pool.h 00:03:18.277 TEST_HEADER include/spdk/blob_bdev.h 00:03:18.277 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:18.277 TEST_HEADER include/spdk/blobfs.h 00:03:18.277 TEST_HEADER include/spdk/blob.h 00:03:18.277 TEST_HEADER include/spdk/conf.h 00:03:18.277 TEST_HEADER include/spdk/config.h 00:03:18.277 TEST_HEADER include/spdk/cpuset.h 00:03:18.277 TEST_HEADER include/spdk/crc16.h 00:03:18.277 TEST_HEADER include/spdk/crc32.h 00:03:18.277 TEST_HEADER include/spdk/crc64.h 00:03:18.277 TEST_HEADER include/spdk/dma.h 00:03:18.277 TEST_HEADER include/spdk/dif.h 00:03:18.277 TEST_HEADER include/spdk/endian.h 00:03:18.277 TEST_HEADER include/spdk/env_dpdk.h 00:03:18.277 TEST_HEADER include/spdk/env.h 00:03:18.277 TEST_HEADER include/spdk/event.h 00:03:18.277 TEST_HEADER include/spdk/fd_group.h 00:03:18.277 TEST_HEADER 
include/spdk/fd.h 00:03:18.277 TEST_HEADER include/spdk/file.h 00:03:18.277 TEST_HEADER include/spdk/fsdev.h 00:03:18.277 TEST_HEADER include/spdk/fsdev_module.h 00:03:18.277 TEST_HEADER include/spdk/ftl.h 00:03:18.277 TEST_HEADER include/spdk/gpt_spec.h 00:03:18.277 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:18.277 TEST_HEADER include/spdk/hexlify.h 00:03:18.277 TEST_HEADER include/spdk/histogram_data.h 00:03:18.277 TEST_HEADER include/spdk/idxd.h 00:03:18.277 TEST_HEADER include/spdk/idxd_spec.h 00:03:18.277 TEST_HEADER include/spdk/init.h 00:03:18.277 TEST_HEADER include/spdk/ioat.h 00:03:18.277 TEST_HEADER include/spdk/ioat_spec.h 00:03:18.277 TEST_HEADER include/spdk/iscsi_spec.h 00:03:18.277 TEST_HEADER include/spdk/json.h 00:03:18.277 TEST_HEADER include/spdk/jsonrpc.h 00:03:18.277 TEST_HEADER include/spdk/keyring.h 00:03:18.277 TEST_HEADER include/spdk/keyring_module.h 00:03:18.277 TEST_HEADER include/spdk/likely.h 00:03:18.277 TEST_HEADER include/spdk/lvol.h 00:03:18.277 TEST_HEADER include/spdk/log.h 00:03:18.277 TEST_HEADER include/spdk/md5.h 00:03:18.277 TEST_HEADER include/spdk/memory.h 00:03:18.277 TEST_HEADER include/spdk/mmio.h 00:03:18.277 TEST_HEADER include/spdk/nbd.h 00:03:18.277 TEST_HEADER include/spdk/net.h 00:03:18.277 TEST_HEADER include/spdk/notify.h 00:03:18.277 TEST_HEADER include/spdk/nvme.h 00:03:18.277 TEST_HEADER include/spdk/nvme_intel.h 00:03:18.277 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:18.277 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:18.277 TEST_HEADER include/spdk/nvme_zns.h 00:03:18.277 TEST_HEADER include/spdk/nvme_spec.h 00:03:18.277 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:18.277 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.277 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:18.277 TEST_HEADER include/spdk/nvmf.h 00:03:18.277 TEST_HEADER include/spdk/nvmf_spec.h 00:03:18.277 TEST_HEADER include/spdk/nvmf_transport.h 00:03:18.277 TEST_HEADER include/spdk/opal_spec.h 00:03:18.277 TEST_HEADER include/spdk/opal.h 00:03:18.277 TEST_HEADER include/spdk/pci_ids.h 00:03:18.277 TEST_HEADER include/spdk/pipe.h 00:03:18.277 TEST_HEADER include/spdk/queue.h 00:03:18.277 TEST_HEADER include/spdk/reduce.h 00:03:18.277 TEST_HEADER include/spdk/rpc.h 00:03:18.277 TEST_HEADER include/spdk/scheduler.h 00:03:18.277 TEST_HEADER include/spdk/scsi.h 00:03:18.277 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.277 TEST_HEADER include/spdk/stdinc.h 00:03:18.277 TEST_HEADER include/spdk/sock.h 00:03:18.277 TEST_HEADER include/spdk/string.h 00:03:18.277 TEST_HEADER include/spdk/thread.h 00:03:18.277 TEST_HEADER include/spdk/trace.h 00:03:18.277 TEST_HEADER include/spdk/trace_parser.h 00:03:18.277 TEST_HEADER include/spdk/tree.h 00:03:18.277 CC app/spdk_dd/spdk_dd.o 00:03:18.277 TEST_HEADER include/spdk/ublk.h 00:03:18.277 TEST_HEADER include/spdk/util.h 00:03:18.277 TEST_HEADER include/spdk/uuid.h 00:03:18.277 TEST_HEADER include/spdk/version.h 00:03:18.277 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.277 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.277 TEST_HEADER include/spdk/vhost.h 00:03:18.277 TEST_HEADER include/spdk/vmd.h 00:03:18.277 TEST_HEADER include/spdk/xor.h 00:03:18.277 TEST_HEADER include/spdk/zipf.h 00:03:18.277 CXX test/cpp_headers/accel.o 00:03:18.277 CXX test/cpp_headers/accel_module.o 00:03:18.277 CXX test/cpp_headers/assert.o 00:03:18.277 CXX test/cpp_headers/barrier.o 00:03:18.277 CXX test/cpp_headers/bdev.o 00:03:18.277 CXX test/cpp_headers/base64.o 00:03:18.277 CXX test/cpp_headers/bdev_module.o 00:03:18.277 CXX 
test/cpp_headers/bdev_zone.o 00:03:18.277 CXX test/cpp_headers/bit_array.o 00:03:18.277 CXX test/cpp_headers/bit_pool.o 00:03:18.277 CXX test/cpp_headers/blob_bdev.o 00:03:18.277 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.277 CXX test/cpp_headers/blobfs.o 00:03:18.277 CXX test/cpp_headers/blob.o 00:03:18.278 CXX test/cpp_headers/conf.o 00:03:18.278 CXX test/cpp_headers/config.o 00:03:18.278 CXX test/cpp_headers/cpuset.o 00:03:18.278 CXX test/cpp_headers/crc16.o 00:03:18.278 CC app/nvmf_tgt/nvmf_main.o 00:03:18.278 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.278 CC examples/ioat/verify/verify.o 00:03:18.278 CXX test/cpp_headers/crc32.o 00:03:18.278 CC examples/ioat/perf/perf.o 00:03:18.278 CC test/thread/poller_perf/poller_perf.o 00:03:18.278 CC examples/util/zipf/zipf.o 00:03:18.278 CC app/spdk_tgt/spdk_tgt.o 00:03:18.278 CC test/env/memory/memory_ut.o 00:03:18.278 CC test/app/histogram_perf/histogram_perf.o 00:03:18.278 CC test/app/jsoncat/jsoncat.o 00:03:18.278 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.278 CC test/app/stub/stub.o 00:03:18.535 CC test/env/pci/pci_ut.o 00:03:18.535 CC app/fio/nvme/fio_plugin.o 00:03:18.535 CC test/env/vtophys/vtophys.o 00:03:18.535 CC app/fio/bdev/fio_plugin.o 00:03:18.535 CC test/dma/test_dma/test_dma.o 00:03:18.535 CC test/app/bdev_svc/bdev_svc.o 00:03:18.535 CC test/env/mem_callbacks/mem_callbacks.o 00:03:18.535 LINK spdk_lspci 00:03:18.795 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.795 LINK interrupt_tgt 00:03:18.795 LINK rpc_client_test 00:03:18.795 LINK spdk_nvme_discover 00:03:18.795 LINK jsoncat 00:03:18.795 LINK poller_perf 00:03:18.795 CXX test/cpp_headers/crc64.o 00:03:18.795 LINK histogram_perf 00:03:18.795 CXX test/cpp_headers/dif.o 00:03:18.795 LINK nvmf_tgt 00:03:18.795 LINK zipf 00:03:18.795 CXX test/cpp_headers/dma.o 00:03:18.795 CXX test/cpp_headers/endian.o 00:03:18.795 LINK vtophys 00:03:18.795 CXX test/cpp_headers/env.o 00:03:18.795 CXX test/cpp_headers/env_dpdk.o 00:03:18.795 LINK env_dpdk_post_init 00:03:18.795 CXX test/cpp_headers/event.o 00:03:18.795 CXX test/cpp_headers/fd_group.o 00:03:18.795 LINK stub 00:03:18.795 CXX test/cpp_headers/fd.o 00:03:18.795 LINK iscsi_tgt 00:03:18.795 CXX test/cpp_headers/file.o 00:03:18.795 CXX test/cpp_headers/fsdev.o 00:03:18.795 LINK spdk_trace_record 00:03:18.795 CXX test/cpp_headers/fsdev_module.o 00:03:18.795 CXX test/cpp_headers/ftl.o 00:03:18.795 CXX test/cpp_headers/fuse_dispatcher.o 00:03:18.795 CXX test/cpp_headers/gpt_spec.o 00:03:18.795 LINK ioat_perf 00:03:18.795 LINK verify 00:03:18.795 CXX test/cpp_headers/hexlify.o 00:03:18.795 LINK spdk_tgt 00:03:19.060 LINK bdev_svc 00:03:19.060 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.060 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.060 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.060 CXX test/cpp_headers/histogram_data.o 00:03:19.060 CXX test/cpp_headers/idxd.o 00:03:19.060 CXX test/cpp_headers/idxd_spec.o 00:03:19.060 CXX test/cpp_headers/ioat.o 00:03:19.060 CXX test/cpp_headers/init.o 00:03:19.060 CXX test/cpp_headers/ioat_spec.o 00:03:19.060 CXX test/cpp_headers/iscsi_spec.o 00:03:19.060 LINK spdk_dd 00:03:19.060 CXX test/cpp_headers/json.o 00:03:19.060 CXX test/cpp_headers/jsonrpc.o 00:03:19.060 LINK spdk_trace 00:03:19.060 CXX test/cpp_headers/keyring.o 00:03:19.320 CXX test/cpp_headers/keyring_module.o 00:03:19.320 CXX test/cpp_headers/likely.o 00:03:19.320 CXX test/cpp_headers/log.o 00:03:19.320 CXX test/cpp_headers/lvol.o 00:03:19.320 CXX test/cpp_headers/md5.o 00:03:19.320 CXX 
test/cpp_headers/memory.o 00:03:19.320 CXX test/cpp_headers/mmio.o 00:03:19.320 CXX test/cpp_headers/nbd.o 00:03:19.320 LINK pci_ut 00:03:19.320 CXX test/cpp_headers/net.o 00:03:19.320 CXX test/cpp_headers/notify.o 00:03:19.320 CXX test/cpp_headers/nvme.o 00:03:19.320 CXX test/cpp_headers/nvme_intel.o 00:03:19.320 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.320 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.320 CXX test/cpp_headers/nvme_spec.o 00:03:19.320 CXX test/cpp_headers/nvme_zns.o 00:03:19.320 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.320 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.320 CXX test/cpp_headers/nvmf.o 00:03:19.583 CXX test/cpp_headers/nvmf_spec.o 00:03:19.583 CXX test/cpp_headers/nvmf_transport.o 00:03:19.583 CC test/event/event_perf/event_perf.o 00:03:19.583 LINK spdk_nvme 00:03:19.583 CC examples/sock/hello_world/hello_sock.o 00:03:19.583 CXX test/cpp_headers/opal.o 00:03:19.583 CXX test/cpp_headers/opal_spec.o 00:03:19.583 CC test/event/reactor/reactor.o 00:03:19.583 LINK nvme_fuzz 00:03:19.583 LINK test_dma 00:03:19.583 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.583 CC examples/idxd/perf/perf.o 00:03:19.583 CXX test/cpp_headers/pci_ids.o 00:03:19.583 LINK spdk_bdev 00:03:19.583 CC examples/thread/thread/thread_ex.o 00:03:19.583 CXX test/cpp_headers/pipe.o 00:03:19.583 CC examples/vmd/led/led.o 00:03:19.583 CXX test/cpp_headers/queue.o 00:03:19.583 CXX test/cpp_headers/reduce.o 00:03:19.583 CC test/event/reactor_perf/reactor_perf.o 00:03:19.583 CXX test/cpp_headers/rpc.o 00:03:19.583 CXX test/cpp_headers/scheduler.o 00:03:19.583 CXX test/cpp_headers/scsi.o 00:03:19.583 CXX test/cpp_headers/scsi_spec.o 00:03:19.583 CXX test/cpp_headers/sock.o 00:03:19.583 CXX test/cpp_headers/stdinc.o 00:03:19.846 CXX test/cpp_headers/string.o 00:03:19.846 CXX test/cpp_headers/thread.o 00:03:19.846 CXX test/cpp_headers/trace.o 00:03:19.846 CC test/event/app_repeat/app_repeat.o 00:03:19.846 CXX test/cpp_headers/trace_parser.o 00:03:19.846 CXX test/cpp_headers/tree.o 00:03:19.846 CXX test/cpp_headers/ublk.o 00:03:19.846 CXX test/cpp_headers/util.o 00:03:19.846 CXX test/cpp_headers/uuid.o 00:03:19.846 CC app/vhost/vhost.o 00:03:19.846 CXX test/cpp_headers/version.o 00:03:19.846 CC test/event/scheduler/scheduler.o 00:03:19.846 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.846 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.846 CXX test/cpp_headers/vhost.o 00:03:19.846 LINK mem_callbacks 00:03:19.846 CXX test/cpp_headers/vmd.o 00:03:19.846 CXX test/cpp_headers/xor.o 00:03:19.846 CXX test/cpp_headers/zipf.o 00:03:19.846 LINK vhost_fuzz 00:03:19.846 LINK event_perf 00:03:19.846 LINK lsvmd 00:03:19.846 LINK reactor 00:03:19.846 LINK spdk_nvme_perf 00:03:19.846 LINK spdk_nvme_identify 00:03:20.105 LINK reactor_perf 00:03:20.105 LINK led 00:03:20.105 LINK spdk_top 00:03:20.105 LINK hello_sock 00:03:20.105 LINK app_repeat 00:03:20.105 LINK thread 00:03:20.105 LINK vhost 00:03:20.363 CC test/nvme/overhead/overhead.o 00:03:20.363 CC test/nvme/reset/reset.o 00:03:20.363 CC test/nvme/aer/aer.o 00:03:20.363 CC test/nvme/err_injection/err_injection.o 00:03:20.363 CC test/nvme/simple_copy/simple_copy.o 00:03:20.363 CC test/nvme/sgl/sgl.o 00:03:20.363 CC test/nvme/connect_stress/connect_stress.o 00:03:20.363 CC test/nvme/boot_partition/boot_partition.o 00:03:20.363 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.363 CC test/nvme/e2edp/nvme_dp.o 00:03:20.363 CC test/nvme/startup/startup.o 00:03:20.363 CC test/nvme/reserve/reserve.o 00:03:20.363 LINK idxd_perf 00:03:20.363 CC 
test/nvme/compliance/nvme_compliance.o 00:03:20.363 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.363 CC test/nvme/cuse/cuse.o 00:03:20.363 CC test/nvme/fdp/fdp.o 00:03:20.363 LINK scheduler 00:03:20.363 CC test/accel/dif/dif.o 00:03:20.363 CC test/blobfs/mkfs/mkfs.o 00:03:20.363 CC test/lvol/esnap/esnap.o 00:03:20.363 LINK connect_stress 00:03:20.622 LINK err_injection 00:03:20.622 LINK doorbell_aers 00:03:20.622 LINK fused_ordering 00:03:20.622 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.622 CC examples/nvme/arbitration/arbitration.o 00:03:20.622 CC examples/nvme/abort/abort.o 00:03:20.622 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.622 CC examples/nvme/hotplug/hotplug.o 00:03:20.622 CC examples/nvme/hello_world/hello_world.o 00:03:20.622 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.622 CC examples/nvme/reconnect/reconnect.o 00:03:20.622 LINK boot_partition 00:03:20.622 LINK mkfs 00:03:20.622 LINK startup 00:03:20.622 LINK reset 00:03:20.622 LINK overhead 00:03:20.622 LINK reserve 00:03:20.622 LINK nvme_compliance 00:03:20.622 LINK nvme_dp 00:03:20.622 CC examples/accel/perf/accel_perf.o 00:03:20.622 LINK aer 00:03:20.622 LINK simple_copy 00:03:20.622 CC examples/blob/cli/blobcli.o 00:03:20.622 CC examples/blob/hello_world/hello_blob.o 00:03:20.622 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.622 LINK sgl 00:03:20.622 LINK memory_ut 00:03:20.881 LINK hello_world 00:03:20.881 LINK pmr_persistence 00:03:20.881 LINK fdp 00:03:20.881 LINK cmb_copy 00:03:20.881 LINK hotplug 00:03:20.881 LINK reconnect 00:03:21.139 LINK abort 00:03:21.139 LINK arbitration 00:03:21.139 LINK hello_blob 00:03:21.139 LINK hello_fsdev 00:03:21.139 LINK dif 00:03:21.398 LINK blobcli 00:03:21.398 LINK nvme_manage 00:03:21.398 LINK accel_perf 00:03:21.726 CC test/bdev/bdevio/bdevio.o 00:03:21.726 LINK iscsi_fuzz 00:03:22.014 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.014 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.014 LINK bdevio 00:03:22.273 LINK hello_bdev 00:03:22.273 LINK cuse 00:03:23.213 LINK bdevperf 00:03:24.156 CC examples/nvmf/nvmf/nvmf.o 00:03:24.416 LINK nvmf 00:03:30.991 LINK esnap 00:03:30.991 00:03:30.991 real 1m51.774s 00:03:30.991 user 14m11.840s 00:03:30.991 sys 2m54.918s 00:03:30.991 14:59:17 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:30.991 14:59:17 make -- common/autotest_common.sh@10 -- $ set +x 00:03:30.991 ************************************ 00:03:30.991 END TEST make 00:03:30.991 ************************************ 00:03:30.991 14:59:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:30.991 14:59:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:30.991 14:59:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:30.991 14:59:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.991 14:59:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:30.991 14:59:17 -- pm/common@44 -- $ pid=2952342 00:03:30.991 14:59:17 -- pm/common@50 -- $ kill -TERM 2952342 00:03:30.991 14:59:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.991 14:59:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:30.991 14:59:17 -- pm/common@44 -- $ pid=2952344 00:03:30.991 14:59:17 -- pm/common@50 -- $ kill -TERM 2952344 00:03:30.991 14:59:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.991 14:59:17 -- pm/common@43 -- 
$ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:30.991 14:59:17 -- pm/common@44 -- $ pid=2952345 00:03:30.991 14:59:17 -- pm/common@50 -- $ kill -TERM 2952345 00:03:30.991 14:59:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.991 14:59:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:30.991 14:59:17 -- pm/common@44 -- $ pid=2952377 00:03:30.991 14:59:17 -- pm/common@50 -- $ sudo -E kill -TERM 2952377 00:03:31.252 14:59:17 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:03:31.252 14:59:17 -- common/autotest_common.sh@1689 -- # lcov --version 00:03:31.252 14:59:17 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:03:31.252 14:59:18 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:03:31.252 14:59:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.252 14:59:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.252 14:59:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.252 14:59:18 -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.252 14:59:18 -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.252 14:59:18 -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.252 14:59:18 -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.252 14:59:18 -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.252 14:59:18 -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.252 14:59:18 -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.252 14:59:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.252 14:59:18 -- scripts/common.sh@344 -- # case "$op" in 00:03:31.252 14:59:18 -- scripts/common.sh@345 -- # : 1 00:03:31.252 14:59:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.252 14:59:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.252 14:59:18 -- scripts/common.sh@365 -- # decimal 1 00:03:31.252 14:59:18 -- scripts/common.sh@353 -- # local d=1 00:03:31.252 14:59:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.252 14:59:18 -- scripts/common.sh@355 -- # echo 1 00:03:31.252 14:59:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.252 14:59:18 -- scripts/common.sh@366 -- # decimal 2 00:03:31.252 14:59:18 -- scripts/common.sh@353 -- # local d=2 00:03:31.252 14:59:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.252 14:59:18 -- scripts/common.sh@355 -- # echo 2 00:03:31.252 14:59:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.252 14:59:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.252 14:59:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.252 14:59:18 -- scripts/common.sh@368 -- # return 0 00:03:31.252 14:59:18 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.252 14:59:18 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:03:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.252 --rc genhtml_branch_coverage=1 00:03:31.252 --rc genhtml_function_coverage=1 00:03:31.252 --rc genhtml_legend=1 00:03:31.252 --rc geninfo_all_blocks=1 00:03:31.252 --rc geninfo_unexecuted_blocks=1 00:03:31.252 00:03:31.252 ' 00:03:31.252 14:59:18 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:03:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.252 --rc genhtml_branch_coverage=1 00:03:31.252 --rc genhtml_function_coverage=1 00:03:31.252 --rc genhtml_legend=1 00:03:31.252 --rc geninfo_all_blocks=1 00:03:31.252 --rc geninfo_unexecuted_blocks=1 00:03:31.252 00:03:31.252 ' 00:03:31.252 14:59:18 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:03:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.252 --rc genhtml_branch_coverage=1 00:03:31.252 --rc genhtml_function_coverage=1 00:03:31.252 --rc genhtml_legend=1 00:03:31.252 --rc geninfo_all_blocks=1 00:03:31.252 --rc geninfo_unexecuted_blocks=1 00:03:31.252 00:03:31.252 ' 00:03:31.252 14:59:18 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:03:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.252 --rc genhtml_branch_coverage=1 00:03:31.252 --rc genhtml_function_coverage=1 00:03:31.252 --rc genhtml_legend=1 00:03:31.252 --rc geninfo_all_blocks=1 00:03:31.252 --rc geninfo_unexecuted_blocks=1 00:03:31.252 00:03:31.252 ' 00:03:31.252 14:59:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:31.252 14:59:18 -- nvmf/common.sh@7 -- # uname -s 00:03:31.252 14:59:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.252 14:59:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.252 14:59:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.252 14:59:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.252 14:59:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.252 14:59:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.252 14:59:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.252 14:59:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.252 14:59:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.252 14:59:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.252 14:59:18 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:31.252 14:59:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:31.252 14:59:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.252 14:59:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.252 14:59:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:31.252 14:59:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.252 14:59:18 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:31.252 14:59:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.252 14:59:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.252 14:59:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.252 14:59:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.252 14:59:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.252 14:59:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.252 14:59:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.252 14:59:18 -- paths/export.sh@5 -- # export PATH 00:03:31.252 14:59:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.252 14:59:18 -- nvmf/common.sh@51 -- # : 0 00:03:31.252 14:59:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.252 14:59:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:31.252 14:59:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.252 14:59:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.252 14:59:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.252 14:59:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.252 14:59:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.252 14:59:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.252 14:59:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.252 14:59:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:31.252 14:59:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:31.252 14:59:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:31.252 14:59:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:31.252 14:59:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:31.252 14:59:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:31.252 14:59:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:31.252 14:59:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:31.252 14:59:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:31.252 14:59:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:31.252 14:59:18 -- spdk/autotest.sh@48 -- # udevadm_pid=3017937 00:03:31.252 14:59:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:31.252 14:59:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:31.253 14:59:18 -- pm/common@17 -- # local monitor 00:03:31.253 14:59:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.253 14:59:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.253 14:59:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.253 14:59:18 -- pm/common@21 -- # date +%s 00:03:31.253 14:59:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.253 14:59:18 -- pm/common@21 -- # date +%s 00:03:31.253 14:59:18 -- pm/common@25 -- # sleep 1 00:03:31.253 14:59:18 -- pm/common@21 -- # date +%s 00:03:31.253 14:59:18 -- pm/common@21 -- # date +%s 00:03:31.253 14:59:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730123958 00:03:31.253 14:59:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730123958 00:03:31.253 14:59:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730123958 00:03:31.253 14:59:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730123958 00:03:31.253 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730123958_collect-cpu-load.pm.log 00:03:31.253 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730123958_collect-cpu-temp.pm.log 00:03:31.253 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730123958_collect-vmstat.pm.log 00:03:31.511 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730123958_collect-bmc-pm.bmc.pm.log 00:03:32.451 14:59:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:32.451 14:59:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:32.451 14:59:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:32.451 14:59:19 -- common/autotest_common.sh@10 -- # set +x 00:03:32.451 14:59:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:32.451 14:59:19 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:32.451 14:59:19 -- common/autotest_common.sh@10 -- # set +x 00:03:32.451 14:59:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:32.451 14:59:19 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.451 14:59:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.451 14:59:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:32.451 14:59:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.451 14:59:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:32.451 14:59:19 -- common/autotest_common.sh@1453 -- # uname 00:03:32.451 14:59:19 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:32.451 14:59:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:32.451 14:59:19 -- common/autotest_common.sh@1473 -- # uname 00:03:32.451 14:59:19 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:32.451 14:59:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:32.451 14:59:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:32.711 lcov: LCOV version 1.15 00:03:32.711 14:59:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:28.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:28.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:15.672 15:00:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:15.672 15:00:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.672 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:05:15.672 15:00:58 -- spdk/autotest.sh@78 -- # rm -f 00:05:15.672 15:00:58 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.673 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:05:15.673 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:15.673 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:15.673 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:15.673 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:15.673 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:15.673 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:15.673 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:15.673 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:15.673 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:15.673 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:15.673 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:15.673 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:15.673 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:15.673 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:15.673 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:15.673 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:15.673 15:01:00 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:15.673 15:01:00 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:05:15.673 15:01:00 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:05:15.673 15:01:00 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:05:15.673 15:01:00 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:05:15.673 15:01:00 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:05:15.673 15:01:00 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:05:15.673 15:01:00 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.673 15:01:00 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:05:15.673 15:01:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:15.673 15:01:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.673 15:01:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.673 15:01:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:15.673 15:01:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:15.673 15:01:00 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:15.673 No valid GPT data, bailing 00:05:15.673 15:01:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.673 15:01:01 -- scripts/common.sh@394 -- # pt= 00:05:15.673 15:01:01 -- scripts/common.sh@395 -- # return 1 00:05:15.673 15:01:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:15.673 1+0 records in 00:05:15.673 1+0 records out 00:05:15.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379911 s, 276 MB/s 00:05:15.673 15:01:01 -- spdk/autotest.sh@105 -- # sync 00:05:15.673 15:01:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:15.673 15:01:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:15.673 15:01:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:17.050 15:01:03 -- spdk/autotest.sh@111 -- # uname -s 00:05:17.050 15:01:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:17.050 15:01:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:17.050 15:01:03 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:18.956 Hugepages 00:05:18.956 node hugesize free / total 00:05:18.956 node0 1048576kB 0 / 0 00:05:18.956 node0 2048kB 0 / 0 00:05:18.956 node1 1048576kB 0 / 0 00:05:18.956 node1 2048kB 0 / 0 00:05:18.956 00:05:18.956 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.956 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:18.956 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:18.956 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:18.956 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:18.956 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:18.956 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:18.956 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:18.956 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:18.956 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:18.956 NVMe 0000:82:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:18.956 15:01:05 -- spdk/autotest.sh@117 -- # uname -s 00:05:18.956 15:01:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:18.956 15:01:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:18.956 15:01:05 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.865 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.865 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.865 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.865 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.865 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.865 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.865 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.865 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.865 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.805 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:22.065 15:01:08 -- common/autotest_common.sh@1513 -- # sleep 1 00:05:23.006 15:01:09 -- common/autotest_common.sh@1514 -- # bdfs=() 00:05:23.006 15:01:09 -- common/autotest_common.sh@1514 -- # local bdfs 00:05:23.006 15:01:09 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.006 15:01:09 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:05:23.006 15:01:09 -- common/autotest_common.sh@1494 -- # bdfs=() 00:05:23.006 15:01:09 -- common/autotest_common.sh@1494 -- # local bdfs 00:05:23.006 15:01:09 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.006 15:01:09 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:23.006 15:01:09 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:05:23.006 15:01:09 -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:05:23.006 15:01:09 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:82:00.0 00:05:23.006 15:01:09 -- common/autotest_common.sh@1518 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:24.421 Waiting for block devices as requested 00:05:24.681 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:05:24.681 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:24.941 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:24.941 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:25.202 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:25.202 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:25.202 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:25.202 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:25.462 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:25.462 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:25.462 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:25.723 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:25.723 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:25.723 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:25.723 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:25.984 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:25.984 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:25.984 15:01:12 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:05:25.984 15:01:12 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1483 -- # grep 0000:82:00.0/nvme/nvme 00:05:25.984 15:01:12 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:25.984 15:01:12 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:05:25.984 15:01:12 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1527 -- # grep oacs 00:05:25.984 15:01:12 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:05:25.984 15:01:12 -- common/autotest_common.sh@1527 -- # oacs=' 0xf' 00:05:25.984 15:01:12 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:05:25.984 15:01:12 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:05:25.984 15:01:12 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:05:25.984 15:01:12 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:05:25.984 15:01:12 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:05:26.244 15:01:12 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:05:26.244 15:01:12 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:05:26.244 15:01:12 -- common/autotest_common.sh@1539 -- # continue 00:05:26.244 15:01:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:26.244 15:01:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.244 15:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:26.244 15:01:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:26.244 15:01:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.244 15:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:26.244 15:01:12 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:27.624 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:27.624 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:27.624 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:27.624 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:27.624 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:27.624 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:27.624 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:27.624 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:27.884 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:28.820 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:28.821 15:01:15 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:28.821 15:01:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.821 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:28.821 15:01:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:28.821 15:01:15 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:05:28.821 15:01:15 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:05:28.821 15:01:15 -- common/autotest_common.sh@1559 -- # bdfs=() 00:05:28.821 15:01:15 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:05:28.821 15:01:15 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:05:28.821 15:01:15 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:05:28.821 15:01:15 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:05:28.821 15:01:15 -- common/autotest_common.sh@1494 -- # bdfs=() 00:05:28.821 15:01:15 -- common/autotest_common.sh@1494 -- # local bdfs 00:05:28.821 15:01:15 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.821 15:01:15 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.821 15:01:15 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:05:29.079 15:01:15 -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:05:29.079 15:01:15 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:82:00.0 00:05:29.079 15:01:15 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:05:29.079 15:01:15 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:29.079 15:01:15 -- common/autotest_common.sh@1562 -- # device=0x0a54 00:05:29.079 15:01:15 -- common/autotest_common.sh@1563 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:29.079 15:01:15 -- common/autotest_common.sh@1564 -- # bdfs+=($bdf) 00:05:29.079 15:01:15 -- common/autotest_common.sh@1568 -- # (( 1 > 0 )) 00:05:29.079 15:01:15 -- common/autotest_common.sh@1569 -- # printf '%s\n' 0000:82:00.0 00:05:29.079 15:01:15 -- common/autotest_common.sh@1575 -- # [[ -z 0000:82:00.0 ]] 00:05:29.079 15:01:15 -- common/autotest_common.sh@1580 -- # spdk_tgt_pid=3036636 00:05:29.079 15:01:15 -- common/autotest_common.sh@1579 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.079 15:01:15 -- common/autotest_common.sh@1581 -- # waitforlisten 3036636 00:05:29.079 15:01:15 -- common/autotest_common.sh@831 -- # '[' -z 3036636 ']' 00:05:29.079 15:01:15 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.079 15:01:15 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.079 15:01:15 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.079 15:01:15 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.079 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:29.079 [2024-10-28 15:01:15.864288] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:05:29.079 [2024-10-28 15:01:15.864474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036636 ] 00:05:29.338 [2024-10-28 15:01:16.033765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.338 [2024-10-28 15:01:16.158615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.905 15:01:16 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.905 15:01:16 -- common/autotest_common.sh@864 -- # return 0 00:05:29.905 15:01:16 -- common/autotest_common.sh@1583 -- # bdf_id=0 00:05:29.905 15:01:16 -- common/autotest_common.sh@1584 -- # for bdf in "${bdfs[@]}" 00:05:29.905 15:01:16 -- common/autotest_common.sh@1585 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:33.187 nvme0n1 00:05:33.187 15:01:19 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:33.757 [2024-10-28 15:01:20.556736] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:33.757 [2024-10-28 15:01:20.556839] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:33.757 request: 00:05:33.757 { 00:05:33.757 "nvme_ctrlr_name": "nvme0", 00:05:33.757 "password": "test", 00:05:33.757 "method": "bdev_nvme_opal_revert", 00:05:33.757 "req_id": 1 00:05:33.757 } 00:05:33.757 Got JSON-RPC error response 00:05:33.757 response: 00:05:33.757 { 00:05:33.757 "code": -32603, 00:05:33.757 "message": "Internal error" 00:05:33.757 } 00:05:33.757 15:01:20 -- common/autotest_common.sh@1587 -- # true 00:05:33.757 15:01:20 -- common/autotest_common.sh@1588 -- # (( ++bdf_id )) 00:05:33.757 15:01:20 -- common/autotest_common.sh@1591 -- # killprocess 3036636 00:05:33.757 15:01:20 -- common/autotest_common.sh@950 -- # '[' -z 3036636 ']' 00:05:33.757 15:01:20 -- common/autotest_common.sh@954 -- # kill -0 3036636 00:05:33.757 15:01:20 -- common/autotest_common.sh@955 -- # uname 00:05:33.757 15:01:20 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.757 15:01:20 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3036636 00:05:34.018 15:01:20 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.018 15:01:20 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.018 15:01:20 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3036636' 00:05:34.018 killing process with pid 3036636 00:05:34.018 15:01:20 -- common/autotest_common.sh@969 -- # kill 3036636 00:05:34.018 15:01:20 -- common/autotest_common.sh@974 -- # wait 3036636 00:05:35.930 15:01:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:35.930 15:01:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:35.930 15:01:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.930 15:01:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.930 15:01:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:35.930 15:01:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.930 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 15:01:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:35.930 15:01:22 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.930 15:01:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.930 15:01:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.930 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 ************************************ 00:05:35.930 START TEST env 00:05:35.930 ************************************ 00:05:35.930 15:01:22 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.930 * Looking for test storage... 00:05:35.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:35.930 15:01:22 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:35.930 15:01:22 env -- common/autotest_common.sh@1689 -- # lcov --version 00:05:35.930 15:01:22 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:36.191 15:01:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.191 15:01:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.191 15:01:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.191 15:01:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.191 15:01:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.191 15:01:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.191 15:01:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.191 15:01:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.191 15:01:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.191 15:01:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.191 15:01:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.191 15:01:22 env -- scripts/common.sh@344 -- # case "$op" in 00:05:36.191 15:01:22 env -- scripts/common.sh@345 -- # : 1 00:05:36.191 15:01:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.191 15:01:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.191 15:01:22 env -- scripts/common.sh@365 -- # decimal 1 00:05:36.191 15:01:22 env -- scripts/common.sh@353 -- # local d=1 00:05:36.191 15:01:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.191 15:01:22 env -- scripts/common.sh@355 -- # echo 1 00:05:36.191 15:01:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.191 15:01:22 env -- scripts/common.sh@366 -- # decimal 2 00:05:36.191 15:01:22 env -- scripts/common.sh@353 -- # local d=2 00:05:36.191 15:01:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.191 15:01:22 env -- scripts/common.sh@355 -- # echo 2 00:05:36.191 15:01:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.191 15:01:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.191 15:01:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.191 15:01:22 env -- scripts/common.sh@368 -- # return 0 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:36.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.191 --rc genhtml_branch_coverage=1 00:05:36.191 --rc genhtml_function_coverage=1 00:05:36.191 --rc genhtml_legend=1 00:05:36.191 --rc geninfo_all_blocks=1 00:05:36.191 --rc geninfo_unexecuted_blocks=1 00:05:36.191 00:05:36.191 ' 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:36.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.191 --rc genhtml_branch_coverage=1 00:05:36.191 --rc genhtml_function_coverage=1 00:05:36.191 --rc genhtml_legend=1 00:05:36.191 --rc geninfo_all_blocks=1 00:05:36.191 --rc geninfo_unexecuted_blocks=1 00:05:36.191 00:05:36.191 ' 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:36.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.191 --rc genhtml_branch_coverage=1 00:05:36.191 --rc genhtml_function_coverage=1 00:05:36.191 --rc genhtml_legend=1 00:05:36.191 --rc geninfo_all_blocks=1 00:05:36.191 --rc geninfo_unexecuted_blocks=1 00:05:36.191 00:05:36.191 ' 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:36.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.191 --rc genhtml_branch_coverage=1 00:05:36.191 --rc genhtml_function_coverage=1 00:05:36.191 --rc genhtml_legend=1 00:05:36.191 --rc geninfo_all_blocks=1 00:05:36.191 --rc geninfo_unexecuted_blocks=1 00:05:36.191 00:05:36.191 ' 00:05:36.191 15:01:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.191 15:01:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.191 15:01:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.191 ************************************ 00:05:36.191 START TEST env_memory 00:05:36.191 ************************************ 00:05:36.191 15:01:23 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.191 00:05:36.191 00:05:36.191 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.191 http://cunit.sourceforge.net/ 00:05:36.191 00:05:36.191 00:05:36.191 Suite: memory 00:05:36.452 Test: alloc and free memory map ...[2024-10-28 15:01:23.084707] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:36.452 passed 00:05:36.452 Test: mem map translation ...[2024-10-28 15:01:23.144367] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:36.452 [2024-10-28 15:01:23.144430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:36.452 [2024-10-28 15:01:23.144547] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:36.452 [2024-10-28 15:01:23.144583] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.452 passed 00:05:36.452 Test: mem map registration ...[2024-10-28 15:01:23.267535] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:36.452 [2024-10-28 15:01:23.267593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:36.452 passed 00:05:36.713 Test: mem map adjacent registrations ...passed 00:05:36.713 00:05:36.713 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.713 suites 1 1 n/a 0 0 00:05:36.713 tests 4 4 4 0 0 00:05:36.713 asserts 152 152 152 0 n/a 00:05:36.713 00:05:36.713 Elapsed time = 0.397 seconds 00:05:36.713 00:05:36.713 real 0m0.414s 00:05:36.713 user 0m0.399s 00:05:36.713 sys 0m0.012s 00:05:36.713 15:01:23 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.713 15:01:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:36.713 ************************************ 00:05:36.713 END TEST env_memory 00:05:36.713 ************************************ 00:05:36.713 15:01:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.713 15:01:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.713 15:01:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.713 15:01:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.713 ************************************ 00:05:36.713 START TEST env_vtophys 00:05:36.713 ************************************ 00:05:36.713 15:01:23 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.713 EAL: lib.eal log level changed from notice to debug 00:05:36.713 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.713 EAL: Detected lcore 1 as core 1 on socket 0 00:05:36.713 EAL: Detected lcore 2 as core 2 on socket 0 00:05:36.713 EAL: Detected lcore 3 as core 3 on socket 0 00:05:36.713 EAL: Detected lcore 4 as core 4 on socket 0 00:05:36.713 EAL: Detected lcore 5 as core 5 on socket 0 00:05:36.713 EAL: Detected lcore 6 as core 8 on socket 0 00:05:36.713 EAL: Detected lcore 7 as core 9 on socket 0 00:05:36.713 EAL: Detected lcore 8 as core 10 on socket 0 00:05:36.713 EAL: Detected lcore 9 as core 11 on socket 0 00:05:36.713 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:36.713 EAL: Detected lcore 11 as core 13 on socket 0 00:05:36.713 EAL: Detected lcore 12 as core 0 on socket 1 00:05:36.713 EAL: Detected lcore 13 as core 1 on socket 1 00:05:36.713 EAL: Detected lcore 14 as core 2 on socket 1 00:05:36.713 EAL: Detected lcore 15 as core 3 on socket 1 00:05:36.713 EAL: Detected lcore 16 as core 4 on socket 1 00:05:36.713 EAL: Detected lcore 17 as core 5 on socket 1 00:05:36.713 EAL: Detected lcore 18 as core 8 on socket 1 00:05:36.713 EAL: Detected lcore 19 as core 9 on socket 1 00:05:36.713 EAL: Detected lcore 20 as core 10 on socket 1 00:05:36.713 EAL: Detected lcore 21 as core 11 on socket 1 00:05:36.713 EAL: Detected lcore 22 as core 12 on socket 1 00:05:36.713 EAL: Detected lcore 23 as core 13 on socket 1 00:05:36.713 EAL: Detected lcore 24 as core 0 on socket 0 00:05:36.713 EAL: Detected lcore 25 as core 1 on socket 0 00:05:36.713 EAL: Detected lcore 26 as core 2 on socket 0 00:05:36.713 EAL: Detected lcore 27 as core 3 on socket 0 00:05:36.713 EAL: Detected lcore 28 as core 4 on socket 0 00:05:36.713 EAL: Detected lcore 29 as core 5 on socket 0 00:05:36.713 EAL: Detected lcore 30 as core 8 on socket 0 00:05:36.713 EAL: Detected lcore 31 as core 9 on socket 0 00:05:36.713 EAL: Detected lcore 32 as core 10 on socket 0 00:05:36.713 EAL: Detected lcore 33 as core 11 on socket 0 00:05:36.713 EAL: Detected lcore 34 as core 12 on socket 0 00:05:36.713 EAL: Detected lcore 35 as core 13 on socket 0 00:05:36.713 EAL: Detected lcore 36 as core 0 on socket 1 00:05:36.713 EAL: Detected lcore 37 as core 1 on socket 1 00:05:36.713 EAL: Detected lcore 38 as core 2 on socket 1 00:05:36.713 EAL: Detected lcore 39 as core 3 on socket 1 00:05:36.713 EAL: Detected lcore 40 as core 4 on socket 1 00:05:36.713 EAL: Detected lcore 41 as core 5 on socket 1 00:05:36.713 EAL: Detected lcore 42 as core 8 on socket 1 00:05:36.713 EAL: Detected lcore 43 as core 9 on socket 1 00:05:36.713 EAL: Detected lcore 44 as core 10 on socket 1 00:05:36.713 EAL: Detected lcore 45 as core 11 on socket 1 00:05:36.713 EAL: Detected lcore 46 as core 12 on socket 1 00:05:36.713 EAL: Detected lcore 47 as core 13 on socket 1 00:05:36.713 EAL: Maximum logical cores by configuration: 128 00:05:36.713 EAL: Detected CPU lcores: 48 00:05:36.713 EAL: Detected NUMA nodes: 2 00:05:36.713 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:36.713 EAL: Detected shared linkage of DPDK 00:05:36.713 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.713 EAL: Bus pci wants IOVA as 'DC' 00:05:36.713 EAL: Buses did not request a specific IOVA mode. 00:05:36.713 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:36.713 EAL: Selected IOVA mode 'VA' 00:05:36.713 EAL: Probing VFIO support... 00:05:36.713 EAL: IOMMU type 1 (Type 1) is supported 00:05:36.713 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:36.713 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:36.713 EAL: VFIO support initialized 00:05:36.713 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.713 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.713 EAL: Setting up physically contiguous memory... 
00:05:36.713 EAL: Setting maximum number of open files to 524288 00:05:36.713 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.713 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:36.713 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.713 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.713 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.713 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.713 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.713 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.713 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.713 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.713 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.713 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.713 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.973 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.973 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:36.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.973 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:36.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.973 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:36.973 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:36.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.973 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:36.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.973 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:36.973 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:36.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.973 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:36.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.973 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:36.973 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:36.973 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.973 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:36.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.973 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.973 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:36.973 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:36.973 EAL: Hugepages will be freed exactly as allocated. 00:05:36.973 EAL: No shared files mode enabled, IPC is disabled 00:05:36.973 EAL: No shared files mode enabled, IPC is disabled 00:05:36.973 EAL: TSC frequency is ~2700000 KHz 00:05:36.973 EAL: Main lcore 0 is ready (tid=7f82f78b7a00;cpuset=[0]) 00:05:36.973 EAL: Trying to obtain current memory policy. 00:05:36.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.973 EAL: Restoring previous memory policy: 0 00:05:36.973 EAL: request: mp_malloc_sync 00:05:36.973 EAL: No shared files mode enabled, IPC is disabled 00:05:36.973 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.973 EAL: No shared files mode enabled, IPC is disabled 00:05:36.973 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:36.973 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.973 00:05:36.973 00:05:36.973 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.973 http://cunit.sourceforge.net/ 00:05:36.973 00:05:36.973 00:05:36.973 Suite: components_suite 00:05:36.973 Test: vtophys_malloc_test ...passed 00:05:36.973 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:36.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.973 EAL: Restoring previous memory policy: 4 00:05:36.973 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.973 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was expanded by 4MB 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was shrunk by 4MB 00:05:36.974 EAL: Trying to obtain current memory policy. 00:05:36.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.974 EAL: Restoring previous memory policy: 4 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was expanded by 6MB 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was shrunk by 6MB 00:05:36.974 EAL: Trying to obtain current memory policy. 00:05:36.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.974 EAL: Restoring previous memory policy: 4 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was expanded by 10MB 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was shrunk by 10MB 00:05:36.974 EAL: Trying to obtain current memory policy. 
00:05:36.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.974 EAL: Restoring previous memory policy: 4 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was expanded by 18MB 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was shrunk by 18MB 00:05:36.974 EAL: Trying to obtain current memory policy. 00:05:36.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.974 EAL: Restoring previous memory policy: 4 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was expanded by 34MB 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was shrunk by 34MB 00:05:36.974 EAL: Trying to obtain current memory policy. 00:05:36.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.974 EAL: Restoring previous memory policy: 4 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was expanded by 66MB 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was shrunk by 66MB 00:05:36.974 EAL: Trying to obtain current memory policy. 00:05:36.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.974 EAL: Restoring previous memory policy: 4 00:05:36.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.974 EAL: request: mp_malloc_sync 00:05:36.974 EAL: No shared files mode enabled, IPC is disabled 00:05:36.974 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.232 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.232 EAL: request: mp_malloc_sync 00:05:37.232 EAL: No shared files mode enabled, IPC is disabled 00:05:37.232 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.232 EAL: Trying to obtain current memory policy. 00:05:37.232 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.232 EAL: Restoring previous memory policy: 4 00:05:37.232 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.232 EAL: request: mp_malloc_sync 00:05:37.232 EAL: No shared files mode enabled, IPC is disabled 00:05:37.232 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.232 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.492 EAL: request: mp_malloc_sync 00:05:37.492 EAL: No shared files mode enabled, IPC is disabled 00:05:37.492 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.492 EAL: Trying to obtain current memory policy. 
00:05:37.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.753 EAL: Restoring previous memory policy: 4 00:05:37.753 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.753 EAL: request: mp_malloc_sync 00:05:37.753 EAL: No shared files mode enabled, IPC is disabled 00:05:37.753 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.753 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.753 EAL: request: mp_malloc_sync 00:05:37.753 EAL: No shared files mode enabled, IPC is disabled 00:05:37.753 EAL: Heap on socket 0 was shrunk by 514MB 00:05:37.753 EAL: Trying to obtain current memory policy. 00:05:37.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.323 EAL: Restoring previous memory policy: 4 00:05:38.323 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.323 EAL: request: mp_malloc_sync 00:05:38.323 EAL: No shared files mode enabled, IPC is disabled 00:05:38.323 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.580 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.841 EAL: request: mp_malloc_sync 00:05:38.841 EAL: No shared files mode enabled, IPC is disabled 00:05:38.841 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.841 passed 00:05:38.841 00:05:38.841 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.841 suites 1 1 n/a 0 0 00:05:38.841 tests 2 2 2 0 0 00:05:38.841 asserts 497 497 497 0 n/a 00:05:38.841 00:05:38.841 Elapsed time = 1.816 seconds 00:05:38.841 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.841 EAL: request: mp_malloc_sync 00:05:38.841 EAL: No shared files mode enabled, IPC is disabled 00:05:38.841 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.841 EAL: No shared files mode enabled, IPC is disabled 00:05:38.841 EAL: No shared files mode enabled, IPC is disabled 00:05:38.841 EAL: No shared files mode enabled, IPC is disabled 00:05:38.841 00:05:38.841 real 0m2.071s 00:05:38.841 user 0m1.011s 00:05:38.841 sys 0m1.006s 00:05:38.841 15:01:25 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.841 15:01:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.841 ************************************ 00:05:38.841 END TEST env_vtophys 00:05:38.841 ************************************ 00:05:38.841 15:01:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.841 15:01:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.841 15:01:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.841 15:01:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.841 ************************************ 00:05:38.841 START TEST env_pci 00:05:38.841 ************************************ 00:05:38.841 15:01:25 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.841 00:05:38.841 00:05:38.841 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.841 http://cunit.sourceforge.net/ 00:05:38.841 00:05:38.841 00:05:38.841 Suite: pci 00:05:38.841 Test: pci_hook ...[2024-10-28 15:01:25.655774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3037796 has claimed it 00:05:39.102 EAL: Cannot find device (10000:00:01.0) 00:05:39.102 EAL: Failed to attach device on primary process 00:05:39.102 passed 00:05:39.102 00:05:39.102 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:39.102 suites 1 1 n/a 0 0 00:05:39.102 tests 1 1 1 0 0 00:05:39.102 asserts 25 25 25 0 n/a 00:05:39.102 00:05:39.102 Elapsed time = 0.049 seconds 00:05:39.102 00:05:39.102 real 0m0.076s 00:05:39.102 user 0m0.025s 00:05:39.102 sys 0m0.050s 00:05:39.102 15:01:25 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.102 15:01:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:39.102 ************************************ 00:05:39.102 END TEST env_pci 00:05:39.102 ************************************ 00:05:39.102 15:01:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.102 15:01:25 env -- env/env.sh@15 -- # uname 00:05:39.102 15:01:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.102 15:01:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:39.102 15:01:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.102 15:01:25 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:39.102 15:01:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.102 15:01:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.102 ************************************ 00:05:39.102 START TEST env_dpdk_post_init 00:05:39.102 ************************************ 00:05:39.102 15:01:25 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.102 EAL: Detected CPU lcores: 48 00:05:39.102 EAL: Detected NUMA nodes: 2 00:05:39.102 EAL: Detected shared linkage of DPDK 00:05:39.102 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.102 EAL: Selected IOVA mode 'VA' 00:05:39.102 EAL: VFIO support initialized 00:05:39.102 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.362 EAL: Using IOMMU type 1 (Type 1) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:39.362 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:39.622 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:39.622 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:39.622 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:40.193 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 
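The probe list above shows env_dpdk_post_init attaching the spdk_ioat driver to the eight I/OAT DMA channels on each socket (device IDs 8086:0e20 through 8086:0e27) and the spdk_nvme driver to the NVMe SSD 8086:0a54 at 0000:82:00.0. The same hardware can be checked from the host shell independently of SPDK; lspci filters by vendor:device ID, and the output will of course differ per machine:

    lspci -d 8086:0e20   # one of the I/OAT channels probed above
    lspci -d 8086:0a54   # the NVMe controller that attaches below as 0000:82:00.0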
00:05:43.488 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:43.488 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:43.488 Starting DPDK initialization... 00:05:43.488 Starting SPDK post initialization... 00:05:43.488 SPDK NVMe probe 00:05:43.488 Attaching to 0000:82:00.0 00:05:43.488 Attached to 0000:82:00.0 00:05:43.488 Cleaning up... 00:05:43.748 00:05:43.748 real 0m4.564s 00:05:43.748 user 0m3.095s 00:05:43.748 sys 0m0.521s 00:05:43.748 15:01:30 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.748 15:01:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.748 ************************************ 00:05:43.748 END TEST env_dpdk_post_init 00:05:43.748 ************************************ 00:05:43.748 15:01:30 env -- env/env.sh@26 -- # uname 00:05:43.748 15:01:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.748 15:01:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.748 15:01:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.748 15:01:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.748 15:01:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.748 ************************************ 00:05:43.748 START TEST env_mem_callbacks 00:05:43.748 ************************************ 00:05:43.748 15:01:30 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.748 EAL: Detected CPU lcores: 48 00:05:43.748 EAL: Detected NUMA nodes: 2 00:05:43.748 EAL: Detected shared linkage of DPDK 00:05:43.748 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.748 EAL: Selected IOVA mode 'VA' 00:05:43.748 EAL: VFIO support initialized 00:05:43.748 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.748 00:05:43.748 00:05:43.748 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.748 http://cunit.sourceforge.net/ 00:05:43.748 00:05:43.748 00:05:43.748 Suite: memory 00:05:43.748 Test: test ... 
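The register/unregister lines that follow are printed from the memory-notification hook the mem_callbacks test installs: whenever DPDK maps or unmaps a hugepage region for one of the test's mallocs, the hook reports the virtual address and length, and the test checks the reported region against the allocation before printing PASSED. The same binary can be run on its own from an SPDK build tree (the log uses the full Jenkins workspace path; a relative path is shown here):

    ./test/env/mem_callbacks/mem_callbacks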
00:05:43.748 register 0x200000200000 2097152 00:05:43.748 malloc 3145728 00:05:43.748 register 0x200000400000 4194304 00:05:43.748 buf 0x200000500000 len 3145728 PASSED 00:05:43.748 malloc 64 00:05:43.748 buf 0x2000004fff40 len 64 PASSED 00:05:43.748 malloc 4194304 00:05:43.748 register 0x200000800000 6291456 00:05:43.748 buf 0x200000a00000 len 4194304 PASSED 00:05:43.748 free 0x200000500000 3145728 00:05:43.748 free 0x2000004fff40 64 00:05:43.748 unregister 0x200000400000 4194304 PASSED 00:05:43.748 free 0x200000a00000 4194304 00:05:43.748 unregister 0x200000800000 6291456 PASSED 00:05:43.748 malloc 8388608 00:05:43.748 register 0x200000400000 10485760 00:05:43.748 buf 0x200000600000 len 8388608 PASSED 00:05:43.748 free 0x200000600000 8388608 00:05:43.748 unregister 0x200000400000 10485760 PASSED 00:05:43.748 passed 00:05:43.748 00:05:43.748 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.748 suites 1 1 n/a 0 0 00:05:43.748 tests 1 1 1 0 0 00:05:43.748 asserts 15 15 15 0 n/a 00:05:43.748 00:05:43.748 Elapsed time = 0.011 seconds 00:05:43.748 00:05:43.748 real 0m0.064s 00:05:43.748 user 0m0.021s 00:05:43.748 sys 0m0.043s 00:05:43.748 15:01:30 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.748 15:01:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:43.748 ************************************ 00:05:43.748 END TEST env_mem_callbacks 00:05:43.748 ************************************ 00:05:43.748 00:05:43.748 real 0m7.828s 00:05:43.748 user 0m4.891s 00:05:43.748 sys 0m1.962s 00:05:43.748 15:01:30 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.748 15:01:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.748 ************************************ 00:05:43.748 END TEST env 00:05:43.748 ************************************ 00:05:43.748 15:01:30 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:43.748 15:01:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.748 15:01:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.748 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.748 ************************************ 00:05:43.748 START TEST rpc 00:05:43.748 ************************************ 00:05:43.748 15:01:30 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.009 * Looking for test storage... 
00:05:44.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.009 15:01:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.009 15:01:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.009 15:01:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.009 15:01:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.009 15:01:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.009 15:01:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.009 15:01:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.009 15:01:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.009 15:01:30 rpc -- scripts/common.sh@345 -- # : 1 00:05:44.009 15:01:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.009 15:01:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.009 15:01:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.009 15:01:30 rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.009 15:01:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.009 15:01:30 rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.009 15:01:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.009 15:01:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.009 15:01:30 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.009 15:01:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.009 15:01:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.009 15:01:30 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.009 --rc genhtml_branch_coverage=1 00:05:44.009 --rc genhtml_function_coverage=1 00:05:44.009 --rc genhtml_legend=1 00:05:44.009 --rc geninfo_all_blocks=1 00:05:44.009 --rc geninfo_unexecuted_blocks=1 00:05:44.009 00:05:44.009 ' 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.009 --rc genhtml_branch_coverage=1 00:05:44.009 --rc genhtml_function_coverage=1 00:05:44.009 --rc genhtml_legend=1 00:05:44.009 --rc geninfo_all_blocks=1 00:05:44.009 --rc geninfo_unexecuted_blocks=1 00:05:44.009 00:05:44.009 ' 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.009 --rc genhtml_branch_coverage=1 00:05:44.009 --rc genhtml_function_coverage=1 
00:05:44.009 --rc genhtml_legend=1 00:05:44.009 --rc geninfo_all_blocks=1 00:05:44.009 --rc geninfo_unexecuted_blocks=1 00:05:44.009 00:05:44.009 ' 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:44.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.009 --rc genhtml_branch_coverage=1 00:05:44.009 --rc genhtml_function_coverage=1 00:05:44.009 --rc genhtml_legend=1 00:05:44.009 --rc geninfo_all_blocks=1 00:05:44.009 --rc geninfo_unexecuted_blocks=1 00:05:44.009 00:05:44.009 ' 00:05:44.009 15:01:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3038496 00:05:44.009 15:01:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.009 15:01:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.009 15:01:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3038496 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@831 -- # '[' -z 3038496 ']' 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.009 15:01:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.270 [2024-10-28 15:01:30.962114] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:05:44.270 [2024-10-28 15:01:30.962291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038496 ] 00:05:44.270 [2024-10-28 15:01:31.122345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.531 [2024-10-28 15:01:31.240451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.531 [2024-10-28 15:01:31.240569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3038496' to capture a snapshot of events at runtime. 00:05:44.531 [2024-10-28 15:01:31.240605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.531 [2024-10-28 15:01:31.240636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.531 [2024-10-28 15:01:31.240680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3038496 for offline analysis/debug. 
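Because spdk_tgt was started with -e bdev, the bdev tracepoint group is enabled and the target prints the two ways to harvest its trace events: take a live snapshot while the process runs, or copy the shared-memory trace file it names for offline decoding. Both commands below restate the NOTICE lines above; the pid 3038496 is specific to this run:

    spdk_trace -s spdk_tgt -p 3038496              # snapshot of events at runtime
    cp /dev/shm/spdk_tgt_trace.pid3038496 /tmp/    # keep the raw trace for offline analysis/debug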
00:05:44.531 [2024-10-28 15:01:31.241946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.473 15:01:32 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.473 15:01:32 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.473 15:01:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.473 15:01:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.473 15:01:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.473 15:01:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.473 15:01:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.473 15:01:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.474 15:01:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.474 ************************************ 00:05:45.474 START TEST rpc_integrity 00:05:45.474 ************************************ 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.474 { 00:05:45.474 "name": "Malloc0", 00:05:45.474 "aliases": [ 00:05:45.474 "525b2f1c-cf51-441f-bc28-4080a642bc52" 00:05:45.474 ], 00:05:45.474 "product_name": "Malloc disk", 00:05:45.474 "block_size": 512, 00:05:45.474 "num_blocks": 16384, 00:05:45.474 "uuid": "525b2f1c-cf51-441f-bc28-4080a642bc52", 00:05:45.474 "assigned_rate_limits": { 00:05:45.474 "rw_ios_per_sec": 0, 00:05:45.474 "rw_mbytes_per_sec": 0, 00:05:45.474 "r_mbytes_per_sec": 0, 00:05:45.474 "w_mbytes_per_sec": 0 00:05:45.474 }, 
00:05:45.474 "claimed": false, 00:05:45.474 "zoned": false, 00:05:45.474 "supported_io_types": { 00:05:45.474 "read": true, 00:05:45.474 "write": true, 00:05:45.474 "unmap": true, 00:05:45.474 "flush": true, 00:05:45.474 "reset": true, 00:05:45.474 "nvme_admin": false, 00:05:45.474 "nvme_io": false, 00:05:45.474 "nvme_io_md": false, 00:05:45.474 "write_zeroes": true, 00:05:45.474 "zcopy": true, 00:05:45.474 "get_zone_info": false, 00:05:45.474 "zone_management": false, 00:05:45.474 "zone_append": false, 00:05:45.474 "compare": false, 00:05:45.474 "compare_and_write": false, 00:05:45.474 "abort": true, 00:05:45.474 "seek_hole": false, 00:05:45.474 "seek_data": false, 00:05:45.474 "copy": true, 00:05:45.474 "nvme_iov_md": false 00:05:45.474 }, 00:05:45.474 "memory_domains": [ 00:05:45.474 { 00:05:45.474 "dma_device_id": "system", 00:05:45.474 "dma_device_type": 1 00:05:45.474 }, 00:05:45.474 { 00:05:45.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.474 "dma_device_type": 2 00:05:45.474 } 00:05:45.474 ], 00:05:45.474 "driver_specific": {} 00:05:45.474 } 00:05:45.474 ]' 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.474 [2024-10-28 15:01:32.231322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.474 [2024-10-28 15:01:32.231420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.474 [2024-10-28 15:01:32.231473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14e5a40 00:05:45.474 [2024-10-28 15:01:32.231508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.474 [2024-10-28 15:01:32.234311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.474 [2024-10-28 15:01:32.234376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.474 Passthru0 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.474 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.474 { 00:05:45.474 "name": "Malloc0", 00:05:45.474 "aliases": [ 00:05:45.474 "525b2f1c-cf51-441f-bc28-4080a642bc52" 00:05:45.474 ], 00:05:45.474 "product_name": "Malloc disk", 00:05:45.474 "block_size": 512, 00:05:45.474 "num_blocks": 16384, 00:05:45.474 "uuid": "525b2f1c-cf51-441f-bc28-4080a642bc52", 00:05:45.474 "assigned_rate_limits": { 00:05:45.474 "rw_ios_per_sec": 0, 00:05:45.474 "rw_mbytes_per_sec": 0, 00:05:45.474 "r_mbytes_per_sec": 0, 00:05:45.474 "w_mbytes_per_sec": 0 00:05:45.474 }, 00:05:45.474 "claimed": true, 00:05:45.474 "claim_type": "exclusive_write", 00:05:45.474 "zoned": false, 00:05:45.474 "supported_io_types": { 00:05:45.474 "read": true, 00:05:45.474 "write": true, 00:05:45.474 "unmap": true, 00:05:45.474 "flush": 
true, 00:05:45.474 "reset": true, 00:05:45.474 "nvme_admin": false, 00:05:45.474 "nvme_io": false, 00:05:45.474 "nvme_io_md": false, 00:05:45.474 "write_zeroes": true, 00:05:45.474 "zcopy": true, 00:05:45.474 "get_zone_info": false, 00:05:45.474 "zone_management": false, 00:05:45.474 "zone_append": false, 00:05:45.474 "compare": false, 00:05:45.474 "compare_and_write": false, 00:05:45.474 "abort": true, 00:05:45.474 "seek_hole": false, 00:05:45.474 "seek_data": false, 00:05:45.474 "copy": true, 00:05:45.474 "nvme_iov_md": false 00:05:45.474 }, 00:05:45.474 "memory_domains": [ 00:05:45.474 { 00:05:45.474 "dma_device_id": "system", 00:05:45.474 "dma_device_type": 1 00:05:45.474 }, 00:05:45.474 { 00:05:45.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.474 "dma_device_type": 2 00:05:45.474 } 00:05:45.474 ], 00:05:45.474 "driver_specific": {} 00:05:45.474 }, 00:05:45.474 { 00:05:45.474 "name": "Passthru0", 00:05:45.474 "aliases": [ 00:05:45.474 "1be36866-7a51-5f7d-b35f-48ad76e7af49" 00:05:45.474 ], 00:05:45.474 "product_name": "passthru", 00:05:45.474 "block_size": 512, 00:05:45.474 "num_blocks": 16384, 00:05:45.474 "uuid": "1be36866-7a51-5f7d-b35f-48ad76e7af49", 00:05:45.474 "assigned_rate_limits": { 00:05:45.474 "rw_ios_per_sec": 0, 00:05:45.474 "rw_mbytes_per_sec": 0, 00:05:45.474 "r_mbytes_per_sec": 0, 00:05:45.474 "w_mbytes_per_sec": 0 00:05:45.474 }, 00:05:45.474 "claimed": false, 00:05:45.474 "zoned": false, 00:05:45.474 "supported_io_types": { 00:05:45.474 "read": true, 00:05:45.474 "write": true, 00:05:45.474 "unmap": true, 00:05:45.474 "flush": true, 00:05:45.474 "reset": true, 00:05:45.474 "nvme_admin": false, 00:05:45.474 "nvme_io": false, 00:05:45.474 "nvme_io_md": false, 00:05:45.474 "write_zeroes": true, 00:05:45.474 "zcopy": true, 00:05:45.474 "get_zone_info": false, 00:05:45.474 "zone_management": false, 00:05:45.474 "zone_append": false, 00:05:45.474 "compare": false, 00:05:45.474 "compare_and_write": false, 00:05:45.474 "abort": true, 00:05:45.474 "seek_hole": false, 00:05:45.474 "seek_data": false, 00:05:45.474 "copy": true, 00:05:45.474 "nvme_iov_md": false 00:05:45.474 }, 00:05:45.474 "memory_domains": [ 00:05:45.474 { 00:05:45.474 "dma_device_id": "system", 00:05:45.474 "dma_device_type": 1 00:05:45.474 }, 00:05:45.474 { 00:05:45.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.474 "dma_device_type": 2 00:05:45.474 } 00:05:45.474 ], 00:05:45.474 "driver_specific": { 00:05:45.474 "passthru": { 00:05:45.474 "name": "Passthru0", 00:05:45.474 "base_bdev_name": "Malloc0" 00:05:45.474 } 00:05:45.474 } 00:05:45.474 } 00:05:45.474 ]' 00:05:45.474 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.736 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.736 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.736 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.736 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.736 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.736 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.736 15:01:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.736 00:05:45.736 real 0m0.422s 00:05:45.736 user 0m0.312s 00:05:45.736 sys 0m0.047s 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.736 15:01:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 ************************************ 00:05:45.736 END TEST rpc_integrity 00:05:45.736 ************************************ 00:05:45.736 15:01:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.736 15:01:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.736 15:01:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.736 15:01:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 ************************************ 00:05:45.736 START TEST rpc_plugins 00:05:45.736 ************************************ 00:05:45.736 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:45.736 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.736 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.736 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.736 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.736 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.736 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.736 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.736 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.736 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.736 { 00:05:45.736 "name": "Malloc1", 00:05:45.736 "aliases": [ 00:05:45.736 "4bdf8884-f4c7-450e-a70e-a845ded2ecc1" 00:05:45.736 ], 00:05:45.736 "product_name": "Malloc disk", 00:05:45.736 "block_size": 4096, 00:05:45.736 "num_blocks": 256, 00:05:45.736 "uuid": "4bdf8884-f4c7-450e-a70e-a845ded2ecc1", 00:05:45.736 "assigned_rate_limits": { 00:05:45.736 "rw_ios_per_sec": 0, 00:05:45.736 "rw_mbytes_per_sec": 0, 00:05:45.736 "r_mbytes_per_sec": 0, 00:05:45.736 "w_mbytes_per_sec": 0 00:05:45.736 }, 00:05:45.736 "claimed": false, 00:05:45.736 "zoned": false, 00:05:45.736 "supported_io_types": { 00:05:45.736 "read": true, 00:05:45.736 "write": true, 00:05:45.736 "unmap": true, 00:05:45.736 "flush": true, 00:05:45.736 "reset": true, 00:05:45.736 "nvme_admin": false, 00:05:45.736 "nvme_io": false, 00:05:45.736 "nvme_io_md": false, 00:05:45.736 "write_zeroes": true, 00:05:45.736 "zcopy": true, 00:05:45.736 "get_zone_info": false, 00:05:45.736 "zone_management": false, 00:05:45.736 "zone_append": false, 00:05:45.736 "compare": false, 00:05:45.736 "compare_and_write": false, 00:05:45.736 "abort": true, 00:05:45.736 "seek_hole": false, 00:05:45.736 "seek_data": false, 00:05:45.736 "copy": true, 00:05:45.736 "nvme_iov_md": false 
00:05:45.736 }, 00:05:45.736 "memory_domains": [ 00:05:45.736 { 00:05:45.736 "dma_device_id": "system", 00:05:45.736 "dma_device_type": 1 00:05:45.736 }, 00:05:45.736 { 00:05:45.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.736 "dma_device_type": 2 00:05:45.736 } 00:05:45.736 ], 00:05:45.736 "driver_specific": {} 00:05:45.736 } 00:05:45.736 ]' 00:05:45.736 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.996 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.996 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.996 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.996 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.996 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.996 15:01:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.996 00:05:45.996 real 0m0.229s 00:05:45.996 user 0m0.179s 00:05:45.996 sys 0m0.016s 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.996 15:01:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 END TEST rpc_plugins 00:05:45.996 ************************************ 00:05:45.996 15:01:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.996 15:01:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.996 15:01:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.996 15:01:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 START TEST rpc_trace_cmd_test 00:05:45.996 ************************************ 00:05:45.996 15:01:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:45.996 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.996 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.996 15:01:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.996 15:01:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 15:01:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.996 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.996 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3038496", 00:05:45.996 "tpoint_group_mask": "0x8", 00:05:45.996 "iscsi_conn": { 00:05:45.996 "mask": "0x2", 00:05:45.996 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "scsi": { 00:05:45.997 "mask": "0x4", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "bdev": { 00:05:45.997 "mask": "0x8", 00:05:45.997 "tpoint_mask": "0xffffffffffffffff" 00:05:45.997 }, 00:05:45.997 "nvmf_rdma": { 00:05:45.997 "mask": "0x10", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "nvmf_tcp": { 00:05:45.997 "mask": "0x20", 00:05:45.997 
"tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "ftl": { 00:05:45.997 "mask": "0x40", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "blobfs": { 00:05:45.997 "mask": "0x80", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "dsa": { 00:05:45.997 "mask": "0x200", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "thread": { 00:05:45.997 "mask": "0x400", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "nvme_pcie": { 00:05:45.997 "mask": "0x800", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "iaa": { 00:05:45.997 "mask": "0x1000", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "nvme_tcp": { 00:05:45.997 "mask": "0x2000", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "bdev_nvme": { 00:05:45.997 "mask": "0x4000", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "sock": { 00:05:45.997 "mask": "0x8000", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "blob": { 00:05:45.997 "mask": "0x10000", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "bdev_raid": { 00:05:45.997 "mask": "0x20000", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 }, 00:05:45.997 "scheduler": { 00:05:45.997 "mask": "0x40000", 00:05:45.997 "tpoint_mask": "0x0" 00:05:45.997 } 00:05:45.997 }' 00:05:45.997 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.257 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:46.257 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.257 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.257 15:01:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.257 15:01:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.257 15:01:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.257 15:01:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.257 15:01:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.517 15:01:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.517 00:05:46.517 real 0m0.306s 00:05:46.517 user 0m0.264s 00:05:46.517 sys 0m0.034s 00:05:46.517 15:01:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.517 15:01:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.517 ************************************ 00:05:46.517 END TEST rpc_trace_cmd_test 00:05:46.517 ************************************ 00:05:46.517 15:01:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.517 15:01:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.517 15:01:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.517 15:01:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.517 15:01:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.517 15:01:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.517 ************************************ 00:05:46.517 START TEST rpc_daemon_integrity 00:05:46.517 ************************************ 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.517 15:01:33 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.517 { 00:05:46.517 "name": "Malloc2", 00:05:46.517 "aliases": [ 00:05:46.517 "5a271378-b583-4538-ba8a-4580514cd85e" 00:05:46.517 ], 00:05:46.517 "product_name": "Malloc disk", 00:05:46.517 "block_size": 512, 00:05:46.517 "num_blocks": 16384, 00:05:46.517 "uuid": "5a271378-b583-4538-ba8a-4580514cd85e", 00:05:46.517 "assigned_rate_limits": { 00:05:46.517 "rw_ios_per_sec": 0, 00:05:46.517 "rw_mbytes_per_sec": 0, 00:05:46.517 "r_mbytes_per_sec": 0, 00:05:46.517 "w_mbytes_per_sec": 0 00:05:46.517 }, 00:05:46.517 "claimed": false, 00:05:46.517 "zoned": false, 00:05:46.517 "supported_io_types": { 00:05:46.517 "read": true, 00:05:46.517 "write": true, 00:05:46.517 "unmap": true, 00:05:46.517 "flush": true, 00:05:46.517 "reset": true, 00:05:46.517 "nvme_admin": false, 00:05:46.517 "nvme_io": false, 00:05:46.517 "nvme_io_md": false, 00:05:46.517 "write_zeroes": true, 00:05:46.517 "zcopy": true, 00:05:46.517 "get_zone_info": false, 00:05:46.517 "zone_management": false, 00:05:46.517 "zone_append": false, 00:05:46.517 "compare": false, 00:05:46.517 "compare_and_write": false, 00:05:46.517 "abort": true, 00:05:46.517 "seek_hole": false, 00:05:46.517 "seek_data": false, 00:05:46.517 "copy": true, 00:05:46.517 "nvme_iov_md": false 00:05:46.517 }, 00:05:46.517 "memory_domains": [ 00:05:46.517 { 00:05:46.517 "dma_device_id": "system", 00:05:46.517 "dma_device_type": 1 00:05:46.517 }, 00:05:46.517 { 00:05:46.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.517 "dma_device_type": 2 00:05:46.517 } 00:05:46.517 ], 00:05:46.517 "driver_specific": {} 00:05:46.517 } 00:05:46.517 ]' 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.517 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.517 [2024-10-28 15:01:33.358503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.518 
[2024-10-28 15:01:33.358600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.518 [2024-10-28 15:01:33.358670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14e5c70 00:05:46.518 [2024-10-28 15:01:33.358710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.518 [2024-10-28 15:01:33.361151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.518 [2024-10-28 15:01:33.361215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.518 Passthru0 00:05:46.518 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.518 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.518 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.518 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.778 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.778 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.778 { 00:05:46.778 "name": "Malloc2", 00:05:46.778 "aliases": [ 00:05:46.778 "5a271378-b583-4538-ba8a-4580514cd85e" 00:05:46.778 ], 00:05:46.778 "product_name": "Malloc disk", 00:05:46.778 "block_size": 512, 00:05:46.778 "num_blocks": 16384, 00:05:46.778 "uuid": "5a271378-b583-4538-ba8a-4580514cd85e", 00:05:46.778 "assigned_rate_limits": { 00:05:46.778 "rw_ios_per_sec": 0, 00:05:46.778 "rw_mbytes_per_sec": 0, 00:05:46.778 "r_mbytes_per_sec": 0, 00:05:46.778 "w_mbytes_per_sec": 0 00:05:46.778 }, 00:05:46.778 "claimed": true, 00:05:46.778 "claim_type": "exclusive_write", 00:05:46.778 "zoned": false, 00:05:46.778 "supported_io_types": { 00:05:46.778 "read": true, 00:05:46.778 "write": true, 00:05:46.778 "unmap": true, 00:05:46.778 "flush": true, 00:05:46.778 "reset": true, 00:05:46.778 "nvme_admin": false, 00:05:46.778 "nvme_io": false, 00:05:46.778 "nvme_io_md": false, 00:05:46.778 "write_zeroes": true, 00:05:46.778 "zcopy": true, 00:05:46.778 "get_zone_info": false, 00:05:46.778 "zone_management": false, 00:05:46.778 "zone_append": false, 00:05:46.778 "compare": false, 00:05:46.778 "compare_and_write": false, 00:05:46.778 "abort": true, 00:05:46.778 "seek_hole": false, 00:05:46.778 "seek_data": false, 00:05:46.778 "copy": true, 00:05:46.778 "nvme_iov_md": false 00:05:46.778 }, 00:05:46.778 "memory_domains": [ 00:05:46.778 { 00:05:46.778 "dma_device_id": "system", 00:05:46.778 "dma_device_type": 1 00:05:46.778 }, 00:05:46.778 { 00:05:46.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.778 "dma_device_type": 2 00:05:46.778 } 00:05:46.778 ], 00:05:46.778 "driver_specific": {} 00:05:46.778 }, 00:05:46.778 { 00:05:46.778 "name": "Passthru0", 00:05:46.778 "aliases": [ 00:05:46.778 "48b9ea34-1120-545b-b8fc-17c42336716a" 00:05:46.778 ], 00:05:46.778 "product_name": "passthru", 00:05:46.778 "block_size": 512, 00:05:46.778 "num_blocks": 16384, 00:05:46.778 "uuid": "48b9ea34-1120-545b-b8fc-17c42336716a", 00:05:46.778 "assigned_rate_limits": { 00:05:46.778 "rw_ios_per_sec": 0, 00:05:46.778 "rw_mbytes_per_sec": 0, 00:05:46.778 "r_mbytes_per_sec": 0, 00:05:46.778 "w_mbytes_per_sec": 0 00:05:46.778 }, 00:05:46.778 "claimed": false, 00:05:46.778 "zoned": false, 00:05:46.778 "supported_io_types": { 00:05:46.778 "read": true, 00:05:46.778 "write": true, 00:05:46.778 "unmap": true, 00:05:46.778 "flush": true, 00:05:46.778 "reset": true, 
00:05:46.778 "nvme_admin": false, 00:05:46.778 "nvme_io": false, 00:05:46.778 "nvme_io_md": false, 00:05:46.778 "write_zeroes": true, 00:05:46.778 "zcopy": true, 00:05:46.778 "get_zone_info": false, 00:05:46.778 "zone_management": false, 00:05:46.778 "zone_append": false, 00:05:46.778 "compare": false, 00:05:46.778 "compare_and_write": false, 00:05:46.778 "abort": true, 00:05:46.778 "seek_hole": false, 00:05:46.778 "seek_data": false, 00:05:46.778 "copy": true, 00:05:46.779 "nvme_iov_md": false 00:05:46.779 }, 00:05:46.779 "memory_domains": [ 00:05:46.779 { 00:05:46.779 "dma_device_id": "system", 00:05:46.779 "dma_device_type": 1 00:05:46.779 }, 00:05:46.779 { 00:05:46.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.779 "dma_device_type": 2 00:05:46.779 } 00:05:46.779 ], 00:05:46.779 "driver_specific": { 00:05:46.779 "passthru": { 00:05:46.779 "name": "Passthru0", 00:05:46.779 "base_bdev_name": "Malloc2" 00:05:46.779 } 00:05:46.779 } 00:05:46.779 } 00:05:46.779 ]' 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.779 00:05:46.779 real 0m0.396s 00:05:46.779 user 0m0.296s 00:05:46.779 sys 0m0.042s 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.779 15:01:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.779 ************************************ 00:05:46.779 END TEST rpc_daemon_integrity 00:05:46.779 ************************************ 00:05:46.779 15:01:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.779 15:01:33 rpc -- rpc/rpc.sh@84 -- # killprocess 3038496 00:05:46.779 15:01:33 rpc -- common/autotest_common.sh@950 -- # '[' -z 3038496 ']' 00:05:46.779 15:01:33 rpc -- common/autotest_common.sh@954 -- # kill -0 3038496 00:05:46.779 15:01:33 rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.779 15:01:33 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.779 15:01:33 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3038496 
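The xtrace lines around here are the harness's killprocess helper shutting the RPC target down: it confirms pid 3038496 is still alive with kill -0, reads the process name with ps to make sure it is not about to signal sudo, then (just below) echoes, kills, and waits on the pid. A stripped-down sketch of that logic, assuming the same semantics as the real helper in common/autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0              # nothing left to kill
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                          # wait works because the target is a child of this shell
    }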
00:05:47.039 15:01:33 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.039 15:01:33 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.039 15:01:33 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3038496' 00:05:47.039 killing process with pid 3038496 00:05:47.039 15:01:33 rpc -- common/autotest_common.sh@969 -- # kill 3038496 00:05:47.039 15:01:33 rpc -- common/autotest_common.sh@974 -- # wait 3038496 00:05:47.607 00:05:47.607 real 0m3.628s 00:05:47.607 user 0m4.680s 00:05:47.607 sys 0m1.074s 00:05:47.607 15:01:34 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.607 15:01:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.607 ************************************ 00:05:47.607 END TEST rpc 00:05:47.607 ************************************ 00:05:47.607 15:01:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.607 15:01:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.607 15:01:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.607 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.607 ************************************ 00:05:47.607 START TEST skip_rpc 00:05:47.607 ************************************ 00:05:47.607 15:01:34 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.607 * Looking for test storage... 00:05:47.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.607 15:01:34 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:47.607 15:01:34 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:05:47.607 15:01:34 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.867 15:01:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:47.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.867 --rc genhtml_branch_coverage=1 00:05:47.867 --rc genhtml_function_coverage=1 00:05:47.867 --rc genhtml_legend=1 00:05:47.867 --rc geninfo_all_blocks=1 00:05:47.867 --rc geninfo_unexecuted_blocks=1 00:05:47.867 00:05:47.867 ' 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:47.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.867 --rc genhtml_branch_coverage=1 00:05:47.867 --rc genhtml_function_coverage=1 00:05:47.867 --rc genhtml_legend=1 00:05:47.867 --rc geninfo_all_blocks=1 00:05:47.867 --rc geninfo_unexecuted_blocks=1 00:05:47.867 00:05:47.867 ' 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:47.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.867 --rc genhtml_branch_coverage=1 00:05:47.867 --rc genhtml_function_coverage=1 00:05:47.867 --rc genhtml_legend=1 00:05:47.867 --rc geninfo_all_blocks=1 00:05:47.867 --rc geninfo_unexecuted_blocks=1 00:05:47.867 00:05:47.867 ' 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:47.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.867 --rc genhtml_branch_coverage=1 00:05:47.867 --rc genhtml_function_coverage=1 00:05:47.867 --rc genhtml_legend=1 00:05:47.867 --rc geninfo_all_blocks=1 00:05:47.867 --rc geninfo_unexecuted_blocks=1 00:05:47.867 00:05:47.867 ' 00:05:47.867 15:01:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.867 15:01:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.867 15:01:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.867 15:01:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.867 ************************************ 00:05:47.867 START TEST skip_rpc 00:05:47.867 ************************************ 00:05:47.868 15:01:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:47.868 
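test_skip_rpc, whose body is traced below, starts spdk_tgt with --no-rpc-server and then requires the RPC call to fail: with no RPC server there is nothing listening on the default /var/tmp/spdk.sock socket, so rpc_cmd spdk_get_version must error out for the test to pass. Re-expressed outside the harness (simplified; the harness wraps the expected failure in its NOT helper and uses the workspace paths shown in the log):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                   # same settle delay the test uses
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded with --no-rpc-server" >&2
        exit 1
    fi
    kill "$tgt_pid"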
15:01:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3039173 00:05:47.868 15:01:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.868 15:01:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.868 15:01:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.868 [2024-10-28 15:01:34.703515] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:05:47.868 [2024-10-28 15:01:34.703697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039173 ] 00:05:48.128 [2024-10-28 15:01:34.874958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.389 [2024-10-28 15:01:34.996026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3039173 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3039173 ']' 00:05:53.675 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3039173 00:05:53.676 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:53.676 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.676 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3039173 00:05:53.676 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.676 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.676 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3039173' 00:05:53.676 killing process with pid 3039173 00:05:53.676 15:01:39 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3039173 00:05:53.676 15:01:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3039173 00:05:53.676 00:05:53.676 real 0m5.611s 00:05:53.676 user 0m5.045s 00:05:53.676 sys 0m0.602s 00:05:53.676 15:01:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.676 15:01:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.676 ************************************ 00:05:53.676 END TEST skip_rpc 00:05:53.676 ************************************ 00:05:53.676 15:01:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.676 15:01:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.676 15:01:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.676 15:01:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.676 ************************************ 00:05:53.676 START TEST skip_rpc_with_json 00:05:53.676 ************************************ 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3039866 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3039866 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3039866 ']' 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.676 15:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.676 [2024-10-28 15:01:40.340496] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:05:53.676 [2024-10-28 15:01:40.340596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039866 ] 00:05:53.676 [2024-10-28 15:01:40.468608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.938 [2024-10-28 15:01:40.595351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.555 [2024-10-28 15:01:41.097536] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.555 request: 00:05:54.555 { 00:05:54.555 "trtype": "tcp", 00:05:54.555 "method": "nvmf_get_transports", 00:05:54.555 "req_id": 1 00:05:54.555 } 00:05:54.555 Got JSON-RPC error response 00:05:54.555 response: 00:05:54.555 { 00:05:54.555 "code": -19, 00:05:54.555 "message": "No such device" 00:05:54.555 } 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.555 [2024-10-28 15:01:41.109865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.555 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.555 { 00:05:54.555 "subsystems": [ 00:05:54.555 { 00:05:54.555 "subsystem": "fsdev", 00:05:54.555 "config": [ 00:05:54.555 { 00:05:54.555 "method": "fsdev_set_opts", 00:05:54.555 "params": { 00:05:54.555 "fsdev_io_pool_size": 65535, 00:05:54.555 "fsdev_io_cache_size": 256 00:05:54.555 } 00:05:54.555 } 00:05:54.555 ] 00:05:54.555 }, 00:05:54.555 { 00:05:54.555 "subsystem": "vfio_user_target", 00:05:54.555 "config": null 00:05:54.555 }, 00:05:54.555 { 00:05:54.555 "subsystem": "keyring", 00:05:54.555 "config": [] 00:05:54.555 }, 00:05:54.555 { 00:05:54.555 "subsystem": "iobuf", 00:05:54.556 "config": [ 00:05:54.556 { 00:05:54.556 "method": "iobuf_set_options", 00:05:54.556 "params": { 00:05:54.556 "small_pool_count": 8192, 00:05:54.556 "large_pool_count": 1024, 00:05:54.556 "small_bufsize": 8192, 00:05:54.556 "large_bufsize": 135168, 00:05:54.556 "enable_numa": false 00:05:54.556 } 00:05:54.556 } 
00:05:54.556 ] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "sock", 00:05:54.556 "config": [ 00:05:54.556 { 00:05:54.556 "method": "sock_set_default_impl", 00:05:54.556 "params": { 00:05:54.556 "impl_name": "posix" 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "sock_impl_set_options", 00:05:54.556 "params": { 00:05:54.556 "impl_name": "ssl", 00:05:54.556 "recv_buf_size": 4096, 00:05:54.556 "send_buf_size": 4096, 00:05:54.556 "enable_recv_pipe": true, 00:05:54.556 "enable_quickack": false, 00:05:54.556 "enable_placement_id": 0, 00:05:54.556 "enable_zerocopy_send_server": true, 00:05:54.556 "enable_zerocopy_send_client": false, 00:05:54.556 "zerocopy_threshold": 0, 00:05:54.556 "tls_version": 0, 00:05:54.556 "enable_ktls": false 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "sock_impl_set_options", 00:05:54.556 "params": { 00:05:54.556 "impl_name": "posix", 00:05:54.556 "recv_buf_size": 2097152, 00:05:54.556 "send_buf_size": 2097152, 00:05:54.556 "enable_recv_pipe": true, 00:05:54.556 "enable_quickack": false, 00:05:54.556 "enable_placement_id": 0, 00:05:54.556 "enable_zerocopy_send_server": true, 00:05:54.556 "enable_zerocopy_send_client": false, 00:05:54.556 "zerocopy_threshold": 0, 00:05:54.556 "tls_version": 0, 00:05:54.556 "enable_ktls": false 00:05:54.556 } 00:05:54.556 } 00:05:54.556 ] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "vmd", 00:05:54.556 "config": [] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "accel", 00:05:54.556 "config": [ 00:05:54.556 { 00:05:54.556 "method": "accel_set_options", 00:05:54.556 "params": { 00:05:54.556 "small_cache_size": 128, 00:05:54.556 "large_cache_size": 16, 00:05:54.556 "task_count": 2048, 00:05:54.556 "sequence_count": 2048, 00:05:54.556 "buf_count": 2048 00:05:54.556 } 00:05:54.556 } 00:05:54.556 ] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "bdev", 00:05:54.556 "config": [ 00:05:54.556 { 00:05:54.556 "method": "bdev_set_options", 00:05:54.556 "params": { 00:05:54.556 "bdev_io_pool_size": 65535, 00:05:54.556 "bdev_io_cache_size": 256, 00:05:54.556 "bdev_auto_examine": true, 00:05:54.556 "iobuf_small_cache_size": 128, 00:05:54.556 "iobuf_large_cache_size": 16 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "bdev_raid_set_options", 00:05:54.556 "params": { 00:05:54.556 "process_window_size_kb": 1024, 00:05:54.556 "process_max_bandwidth_mb_sec": 0 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "bdev_iscsi_set_options", 00:05:54.556 "params": { 00:05:54.556 "timeout_sec": 30 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "bdev_nvme_set_options", 00:05:54.556 "params": { 00:05:54.556 "action_on_timeout": "none", 00:05:54.556 "timeout_us": 0, 00:05:54.556 "timeout_admin_us": 0, 00:05:54.556 "keep_alive_timeout_ms": 10000, 00:05:54.556 "arbitration_burst": 0, 00:05:54.556 "low_priority_weight": 0, 00:05:54.556 "medium_priority_weight": 0, 00:05:54.556 "high_priority_weight": 0, 00:05:54.556 "nvme_adminq_poll_period_us": 10000, 00:05:54.556 "nvme_ioq_poll_period_us": 0, 00:05:54.556 "io_queue_requests": 0, 00:05:54.556 "delay_cmd_submit": true, 00:05:54.556 "transport_retry_count": 4, 00:05:54.556 "bdev_retry_count": 3, 00:05:54.556 "transport_ack_timeout": 0, 00:05:54.556 "ctrlr_loss_timeout_sec": 0, 00:05:54.556 "reconnect_delay_sec": 0, 00:05:54.556 "fast_io_fail_timeout_sec": 0, 00:05:54.556 "disable_auto_failback": false, 00:05:54.556 "generate_uuids": false, 00:05:54.556 "transport_tos": 
0, 00:05:54.556 "nvme_error_stat": false, 00:05:54.556 "rdma_srq_size": 0, 00:05:54.556 "io_path_stat": false, 00:05:54.556 "allow_accel_sequence": false, 00:05:54.556 "rdma_max_cq_size": 0, 00:05:54.556 "rdma_cm_event_timeout_ms": 0, 00:05:54.556 "dhchap_digests": [ 00:05:54.556 "sha256", 00:05:54.556 "sha384", 00:05:54.556 "sha512" 00:05:54.556 ], 00:05:54.556 "dhchap_dhgroups": [ 00:05:54.556 "null", 00:05:54.556 "ffdhe2048", 00:05:54.556 "ffdhe3072", 00:05:54.556 "ffdhe4096", 00:05:54.556 "ffdhe6144", 00:05:54.556 "ffdhe8192" 00:05:54.556 ] 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "bdev_nvme_set_hotplug", 00:05:54.556 "params": { 00:05:54.556 "period_us": 100000, 00:05:54.556 "enable": false 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "bdev_wait_for_examine" 00:05:54.556 } 00:05:54.556 ] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "scsi", 00:05:54.556 "config": null 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "scheduler", 00:05:54.556 "config": [ 00:05:54.556 { 00:05:54.556 "method": "framework_set_scheduler", 00:05:54.556 "params": { 00:05:54.556 "name": "static" 00:05:54.556 } 00:05:54.556 } 00:05:54.556 ] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "vhost_scsi", 00:05:54.556 "config": [] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "vhost_blk", 00:05:54.556 "config": [] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "ublk", 00:05:54.556 "config": [] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "nbd", 00:05:54.556 "config": [] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "nvmf", 00:05:54.556 "config": [ 00:05:54.556 { 00:05:54.556 "method": "nvmf_set_config", 00:05:54.556 "params": { 00:05:54.556 "discovery_filter": "match_any", 00:05:54.556 "admin_cmd_passthru": { 00:05:54.556 "identify_ctrlr": false 00:05:54.556 }, 00:05:54.556 "dhchap_digests": [ 00:05:54.556 "sha256", 00:05:54.556 "sha384", 00:05:54.556 "sha512" 00:05:54.556 ], 00:05:54.556 "dhchap_dhgroups": [ 00:05:54.556 "null", 00:05:54.556 "ffdhe2048", 00:05:54.556 "ffdhe3072", 00:05:54.556 "ffdhe4096", 00:05:54.556 "ffdhe6144", 00:05:54.556 "ffdhe8192" 00:05:54.556 ] 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "nvmf_set_max_subsystems", 00:05:54.556 "params": { 00:05:54.556 "max_subsystems": 1024 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "nvmf_set_crdt", 00:05:54.556 "params": { 00:05:54.556 "crdt1": 0, 00:05:54.556 "crdt2": 0, 00:05:54.556 "crdt3": 0 00:05:54.556 } 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "method": "nvmf_create_transport", 00:05:54.556 "params": { 00:05:54.556 "trtype": "TCP", 00:05:54.556 "max_queue_depth": 128, 00:05:54.556 "max_io_qpairs_per_ctrlr": 127, 00:05:54.556 "in_capsule_data_size": 4096, 00:05:54.556 "max_io_size": 131072, 00:05:54.556 "io_unit_size": 131072, 00:05:54.556 "max_aq_depth": 128, 00:05:54.556 "num_shared_buffers": 511, 00:05:54.556 "buf_cache_size": 4294967295, 00:05:54.556 "dif_insert_or_strip": false, 00:05:54.556 "zcopy": false, 00:05:54.556 "c2h_success": true, 00:05:54.556 "sock_priority": 0, 00:05:54.556 "abort_timeout_sec": 1, 00:05:54.556 "ack_timeout": 0, 00:05:54.556 "data_wr_pool_size": 0 00:05:54.556 } 00:05:54.556 } 00:05:54.556 ] 00:05:54.556 }, 00:05:54.556 { 00:05:54.556 "subsystem": "iscsi", 00:05:54.556 "config": [ 00:05:54.556 { 00:05:54.556 "method": "iscsi_set_options", 00:05:54.556 "params": { 00:05:54.556 "node_base": "iqn.2016-06.io.spdk", 00:05:54.556 "max_sessions": 
128, 00:05:54.556 "max_connections_per_session": 2, 00:05:54.556 "max_queue_depth": 64, 00:05:54.556 "default_time2wait": 2, 00:05:54.556 "default_time2retain": 20, 00:05:54.556 "first_burst_length": 8192, 00:05:54.556 "immediate_data": true, 00:05:54.556 "allow_duplicated_isid": false, 00:05:54.556 "error_recovery_level": 0, 00:05:54.556 "nop_timeout": 60, 00:05:54.556 "nop_in_interval": 30, 00:05:54.556 "disable_chap": false, 00:05:54.556 "require_chap": false, 00:05:54.556 "mutual_chap": false, 00:05:54.556 "chap_group": 0, 00:05:54.556 "max_large_datain_per_connection": 64, 00:05:54.556 "max_r2t_per_connection": 4, 00:05:54.556 "pdu_pool_size": 36864, 00:05:54.556 "immediate_data_pool_size": 16384, 00:05:54.556 "data_out_pool_size": 2048 00:05:54.556 } 00:05:54.556 } 00:05:54.556 ] 00:05:54.556 } 00:05:54.556 ] 00:05:54.556 } 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3039866 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3039866 ']' 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3039866 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3039866 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3039866' 00:05:54.557 killing process with pid 3039866 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3039866 00:05:54.557 15:01:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3039866 00:05:55.133 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3040009 00:05:55.133 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:55.133 15:01:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:00.417 15:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3040009 00:06:00.417 15:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3040009 ']' 00:06:00.417 15:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3040009 00:06:00.417 15:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:00.417 15:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.417 15:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3040009 00:06:00.417 15:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.417 15:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.417 15:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 3040009' 00:06:00.417 killing process with pid 3040009 00:06:00.417 15:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3040009 00:06:00.417 15:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3040009 00:06:00.986 15:01:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.986 15:01:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.986 00:06:00.986 real 0m7.333s 00:06:00.986 user 0m6.818s 00:06:00.986 sys 0m1.235s 00:06:00.986 15:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.986 15:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.987 ************************************ 00:06:00.987 END TEST skip_rpc_with_json 00:06:00.987 ************************************ 00:06:00.987 15:01:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.987 15:01:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.987 15:01:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.987 15:01:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.987 ************************************ 00:06:00.987 START TEST skip_rpc_with_delay 00:06:00.987 ************************************ 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.987 
[2024-10-28 15:01:47.740915] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.987 00:06:00.987 real 0m0.084s 00:06:00.987 user 0m0.055s 00:06:00.987 sys 0m0.029s 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.987 15:01:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.987 ************************************ 00:06:00.987 END TEST skip_rpc_with_delay 00:06:00.987 ************************************ 00:06:00.987 15:01:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.987 15:01:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.987 15:01:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.987 15:01:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.987 15:01:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.987 15:01:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.987 ************************************ 00:06:00.987 START TEST exit_on_failed_rpc_init 00:06:00.987 ************************************ 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3040728 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3040728 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3040728 ']' 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.987 15:01:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.247 [2024-10-28 15:01:47.964148] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
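The skip_rpc_with_delay error above is the expected outcome: --wait-for-rpc pauses subsystem initialization until an RPC says otherwise, so it is rejected when combined with --no-rpc-server. With the RPC server enabled, the deferred-init flow looks roughly like this (paths assume an SPDK checkout; rpc_get_methods --current and framework_start_init are the stock RPCs for this state):

  ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  sleep 2                                        # or poll the socket the way waitforlisten does
  ./scripts/rpc.py rpc_get_methods --current     # only pre-init RPCs are listed at this point
  ./scripts/rpc.py framework_start_init          # let subsystem initialization proceed
  ./scripts/rpc.py spdk_get_version              # the full RPC set is now available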
00:06:01.247 [2024-10-28 15:01:47.964344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040728 ] 00:06:01.509 [2024-10-28 15:01:48.131448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.509 [2024-10-28 15:01:48.237751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:02.082 15:01:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.082 [2024-10-28 15:01:48.845246] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:06:02.082 [2024-10-28 15:01:48.845433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040865 ] 00:06:02.340 [2024-10-28 15:01:49.013647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.340 [2024-10-28 15:01:49.130912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.341 [2024-10-28 15:01:49.131138] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
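That "socket in use" error is exactly what exit_on_failed_rpc_init provokes: a second spdk_tgt is launched while the first still owns /var/tmp/spdk.sock, and the test checks that the newcomer exits instead of carrying on. Running two targets side by side requires giving the second one its own RPC endpoint, roughly:

  ./build/bin/spdk_tgt -m 0x1 &                              # first target owns /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &       # second target gets its own socket
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # address the second instance explicitly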
00:06:02.341 [2024-10-28 15:01:49.131189] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.341 [2024-10-28 15:01:49.131219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3040728 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3040728 ']' 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3040728 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3040728 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3040728' 00:06:02.599 killing process with pid 3040728 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3040728 00:06:02.599 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3040728 00:06:03.164 00:06:03.164 real 0m2.042s 00:06:03.164 user 0m2.372s 00:06:03.164 sys 0m0.827s 00:06:03.164 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.164 15:01:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.164 ************************************ 00:06:03.164 END TEST exit_on_failed_rpc_init 00:06:03.164 ************************************ 00:06:03.164 15:01:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.164 00:06:03.164 real 0m15.620s 00:06:03.164 user 0m14.578s 00:06:03.164 sys 0m2.978s 00:06:03.164 15:01:49 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.164 15:01:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.164 ************************************ 00:06:03.164 END TEST skip_rpc 00:06:03.164 ************************************ 00:06:03.164 15:01:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:03.164 15:01:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.164 15:01:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.164 15:01:49 -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.164 ************************************ 00:06:03.164 START TEST rpc_client 00:06:03.164 ************************************ 00:06:03.164 15:01:49 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:03.423 * Looking for test storage... 00:06:03.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:03.423 15:01:50 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:03.423 15:01:50 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.424 15:01:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:03.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.424 --rc genhtml_branch_coverage=1 00:06:03.424 --rc genhtml_function_coverage=1 00:06:03.424 --rc genhtml_legend=1 00:06:03.424 --rc geninfo_all_blocks=1 00:06:03.424 --rc geninfo_unexecuted_blocks=1 00:06:03.424 00:06:03.424 ' 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:03.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.424 --rc genhtml_branch_coverage=1 00:06:03.424 --rc genhtml_function_coverage=1 00:06:03.424 --rc genhtml_legend=1 00:06:03.424 --rc geninfo_all_blocks=1 00:06:03.424 --rc geninfo_unexecuted_blocks=1 00:06:03.424 00:06:03.424 ' 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:03.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.424 --rc genhtml_branch_coverage=1 00:06:03.424 --rc genhtml_function_coverage=1 00:06:03.424 --rc genhtml_legend=1 00:06:03.424 --rc geninfo_all_blocks=1 00:06:03.424 --rc geninfo_unexecuted_blocks=1 00:06:03.424 00:06:03.424 ' 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:03.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.424 --rc genhtml_branch_coverage=1 00:06:03.424 --rc genhtml_function_coverage=1 00:06:03.424 --rc genhtml_legend=1 00:06:03.424 --rc geninfo_all_blocks=1 00:06:03.424 --rc geninfo_unexecuted_blocks=1 00:06:03.424 00:06:03.424 ' 00:06:03.424 15:01:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:03.424 OK 00:06:03.424 15:01:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.424 00:06:03.424 real 0m0.180s 00:06:03.424 user 0m0.124s 00:06:03.424 sys 0m0.065s 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.424 15:01:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.424 ************************************ 00:06:03.424 END TEST rpc_client 00:06:03.424 ************************************ 00:06:03.424 15:01:50 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
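rpc_client_test above is the compiled test for SPDK's C JSON-RPC client; it talks to the same Unix socket that rpc.py uses, with every call wrapped in a JSON-RPC 2.0 object such as {"jsonrpc": "2.0", "method": "spdk_get_version", "id": 1}. The shell-side equivalent, with the socket and timeout spelled out (both are standard rpc.py options):

  ./scripts/rpc.py -s /var/tmp/spdk.sock -t 5 spdk_get_version
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods | head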
00:06:03.424 15:01:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.424 15:01:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.424 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.424 ************************************ 00:06:03.424 START TEST json_config 00:06:03.424 ************************************ 00:06:03.424 15:01:50 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.686 15:01:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.686 15:01:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.686 15:01:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.686 15:01:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.686 15:01:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.686 15:01:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.686 15:01:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.686 15:01:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:03.686 15:01:50 json_config -- scripts/common.sh@345 -- # : 1 00:06:03.686 15:01:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.686 15:01:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.686 15:01:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:03.686 15:01:50 json_config -- scripts/common.sh@353 -- # local d=1 00:06:03.686 15:01:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.686 15:01:50 json_config -- scripts/common.sh@355 -- # echo 1 00:06:03.686 15:01:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.686 15:01:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@353 -- # local d=2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.686 15:01:50 json_config -- scripts/common.sh@355 -- # echo 2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.686 15:01:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.686 15:01:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.686 15:01:50 json_config -- scripts/common.sh@368 -- # return 0 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:03.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.686 --rc genhtml_branch_coverage=1 00:06:03.686 --rc genhtml_function_coverage=1 00:06:03.686 --rc genhtml_legend=1 00:06:03.686 --rc geninfo_all_blocks=1 00:06:03.686 --rc geninfo_unexecuted_blocks=1 00:06:03.686 00:06:03.686 ' 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:03.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.686 --rc genhtml_branch_coverage=1 00:06:03.686 --rc genhtml_function_coverage=1 00:06:03.686 --rc genhtml_legend=1 00:06:03.686 --rc geninfo_all_blocks=1 00:06:03.686 --rc geninfo_unexecuted_blocks=1 00:06:03.686 00:06:03.686 ' 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:03.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.686 --rc genhtml_branch_coverage=1 00:06:03.686 --rc genhtml_function_coverage=1 00:06:03.686 --rc genhtml_legend=1 00:06:03.686 --rc geninfo_all_blocks=1 00:06:03.686 --rc geninfo_unexecuted_blocks=1 00:06:03.686 00:06:03.686 ' 00:06:03.686 15:01:50 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:03.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.686 --rc genhtml_branch_coverage=1 00:06:03.686 --rc genhtml_function_coverage=1 00:06:03.686 --rc genhtml_legend=1 00:06:03.686 --rc geninfo_all_blocks=1 00:06:03.686 --rc geninfo_unexecuted_blocks=1 00:06:03.686 00:06:03.686 ' 00:06:03.686 15:01:50 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:03.686 15:01:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.686 15:01:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.686 15:01:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.686 15:01:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.686 15:01:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.686 15:01:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.686 15:01:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.686 15:01:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.686 15:01:50 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.686 15:01:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@51 -- # : 0 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
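The host identity exported above comes from nvme-cli: gen-hostnqn prints an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the test derives the host ID from its UUID suffix. A hedged sketch of that derivation (the exact parsing in nvmf/common.sh may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID for --hostid
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"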
00:06:03.686 15:01:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.686 15:01:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.686 15:01:50 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:03.686 15:01:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.686 15:01:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.686 15:01:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:03.687 INFO: JSON configuration test init 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.687 15:01:50 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:03.687 15:01:50 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:03.687 15:01:50 json_config -- json_config/common.sh@10 -- # shift 00:06:03.687 15:01:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.687 15:01:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.687 15:01:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.687 15:01:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.687 15:01:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.687 15:01:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3041131 00:06:03.687 15:01:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:03.687 15:01:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.687 Waiting for target to run... 00:06:03.687 15:01:50 json_config -- json_config/common.sh@25 -- # waitforlisten 3041131 /var/tmp/spdk_tgt.sock 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@831 -- # '[' -z 3041131 ']' 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.687 15:01:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.947 [2024-10-28 15:01:50.575113] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
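waitforlisten above simply polls the new target's RPC socket until it answers; a rough shell equivalent, assuming rpc.py keeps returning non-zero until /var/tmp/spdk_tgt.sock is up:

  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done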
00:06:03.947 [2024-10-28 15:01:50.575282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041131 ] 00:06:04.519 [2024-10-28 15:01:51.151012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.519 [2024-10-28 15:01:51.244197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.460 15:01:52 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.460 15:01:52 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:05.460 15:01:52 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.460 00:06:05.460 15:01:52 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:05.460 15:01:52 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:05.460 15:01:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:05.460 15:01:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.460 15:01:52 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:05.460 15:01:52 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:05.460 15:01:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.460 15:01:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.460 15:01:52 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:05.460 15:01:52 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:05.460 15:01:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:09.661 15:01:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.661 15:01:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:09.661 15:01:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:09.661 15:01:55 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:09.661 15:01:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:09.661 15:01:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:09.661 15:01:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:09.661 15:01:56 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:09.661 15:01:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:09.661 15:01:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:09.661 15:01:56 json_config -- json_config/json_config.sh@54 -- # sort 00:06:09.661 15:01:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:09.662 15:01:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.662 15:01:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:09.662 15:01:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.662 15:01:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:09.662 15:01:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:09.662 15:01:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:09.921 MallocForNvmf0 00:06:09.921 15:01:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.921 15:01:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:10.861 MallocForNvmf1 00:06:10.861 15:01:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:10.861 15:01:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:11.432 [2024-10-28 15:01:58.123439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.432 15:01:58 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:11.432 15:01:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:12.002 15:01:58 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.002 15:01:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.941 15:01:59 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.941 15:01:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:13.511 15:02:00 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:13.511 15:02:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.081 [2024-10-28 15:02:00.825493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.081 15:02:00 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:14.081 15:02:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.081 15:02:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.082 15:02:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:14.082 15:02:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.082 15:02:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.082 15:02:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:14.082 15:02:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.082 15:02:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.651 MallocBdevForConfigChangeCheck 00:06:14.651 15:02:01 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:14.651 15:02:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.651 15:02:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.651 15:02:01 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:14.651 15:02:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.222 15:02:02 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:15.222 INFO: shutting down applications... 
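Stripped of the tgt_rpc wrapper, the NVMe-oF bring-up logged above (before the shutdown) is a plain sequence of rpc.py calls against /var/tmp/spdk_tgt.sock; the arguments below are the ones visible in the log:

  rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420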
00:06:15.222 15:02:02 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:15.223 15:02:02 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:15.223 15:02:02 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:15.223 15:02:02 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:17.131 Calling clear_iscsi_subsystem 00:06:17.131 Calling clear_nvmf_subsystem 00:06:17.131 Calling clear_nbd_subsystem 00:06:17.131 Calling clear_ublk_subsystem 00:06:17.131 Calling clear_vhost_blk_subsystem 00:06:17.131 Calling clear_vhost_scsi_subsystem 00:06:17.131 Calling clear_bdev_subsystem 00:06:17.131 15:02:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:17.131 15:02:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:17.131 15:02:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:17.131 15:02:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.131 15:02:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:17.131 15:02:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:18.070 15:02:04 json_config -- json_config/json_config.sh@352 -- # break 00:06:18.070 15:02:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:18.070 15:02:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:18.070 15:02:04 json_config -- json_config/common.sh@31 -- # local app=target 00:06:18.070 15:02:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.070 15:02:04 json_config -- json_config/common.sh@35 -- # [[ -n 3041131 ]] 00:06:18.070 15:02:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3041131 00:06:18.070 15:02:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.070 15:02:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.070 15:02:04 json_config -- json_config/common.sh@41 -- # kill -0 3041131 00:06:18.070 15:02:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:18.331 15:02:05 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:18.331 15:02:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.331 15:02:05 json_config -- json_config/common.sh@41 -- # kill -0 3041131 00:06:18.331 15:02:05 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:18.331 15:02:05 json_config -- json_config/common.sh@43 -- # break 00:06:18.331 15:02:05 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:18.331 15:02:05 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:18.331 SPDK target shutdown done 00:06:18.331 15:02:05 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:18.331 INFO: relaunching applications... 
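The shutdown that just completed ("SPDK target shutdown done") follows a simple pattern from json_config/common.sh: clear the configuration, send SIGINT to the target, then poll the PID until it disappears. A minimal sketch of that loop, simplified from the trace (the 30-iteration limit and 0.5 s sleep match what the trace shows; error handling is reduced):

    # Bounded-wait shutdown as in json_config_test_shutdown_app (simplified sketch).
    app_pid=3041131                      # PID taken from the trace; normally ${app_pid[$app]}
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done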
00:06:18.331 15:02:05 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.331 15:02:05 json_config -- json_config/common.sh@9 -- # local app=target 00:06:18.331 15:02:05 json_config -- json_config/common.sh@10 -- # shift 00:06:18.331 15:02:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.331 15:02:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.331 15:02:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.331 15:02:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.331 15:02:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.331 15:02:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3042974 00:06:18.331 15:02:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.331 15:02:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.331 Waiting for target to run... 00:06:18.331 15:02:05 json_config -- json_config/common.sh@25 -- # waitforlisten 3042974 /var/tmp/spdk_tgt.sock 00:06:18.331 15:02:05 json_config -- common/autotest_common.sh@831 -- # '[' -z 3042974 ']' 00:06:18.331 15:02:05 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.331 15:02:05 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.331 15:02:05 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.331 15:02:05 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.331 15:02:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.592 [2024-10-28 15:02:05.294054] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:06:18.592 [2024-10-28 15:02:05.294237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3042974 ] 00:06:19.164 [2024-10-28 15:02:05.983052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.425 [2024-10-28 15:02:06.077414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.720 [2024-10-28 15:02:09.242738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.720 [2024-10-28 15:02:09.275558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:22.720 15:02:09 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.720 15:02:09 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:22.720 15:02:09 json_config -- json_config/common.sh@26 -- # echo '' 00:06:22.720 00:06:22.720 15:02:09 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:22.720 15:02:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:22.720 INFO: Checking if target configuration is the same... 
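The "same configuration" check that follows works by normalizing both sides before diffing: the live configuration is pulled over RPC with save_config, the saved file is read from disk, both are passed through config_filter.py -method sort (apparently via stdin/stdout, since the trace shows no file arguments), and diff -u decides the result. A condensed sketch of that comparison; the temporary file names are illustrative (the real run uses mktemp names such as /tmp/62.c9A) and $SPDK_DIR is again an assumption:

    # Compare the running target's config with the saved file, as json_diff.sh does.
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER="$SPDK_DIR/test/json_config/config_filter.py"
    $RPC save_config | "$FILTER" -method sort > /tmp/live_config.json
    "$FILTER" -method sort < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/saved_config.json
    if diff -u /tmp/live_config.json /tmp/saved_config.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi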
00:06:22.720 15:02:09 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.720 15:02:09 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:22.720 15:02:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.720 + '[' 2 -ne 2 ']' 00:06:22.720 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.720 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:22.720 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.720 +++ basename /dev/fd/62 00:06:22.720 ++ mktemp /tmp/62.XXX 00:06:22.720 + tmp_file_1=/tmp/62.c9A 00:06:22.720 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.720 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.720 + tmp_file_2=/tmp/spdk_tgt_config.json.4Gy 00:06:22.720 + ret=0 00:06:22.720 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.658 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.658 + diff -u /tmp/62.c9A /tmp/spdk_tgt_config.json.4Gy 00:06:23.658 + echo 'INFO: JSON config files are the same' 00:06:23.658 INFO: JSON config files are the same 00:06:23.658 + rm /tmp/62.c9A /tmp/spdk_tgt_config.json.4Gy 00:06:23.658 + exit 0 00:06:23.658 15:02:10 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:23.658 15:02:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:23.658 INFO: changing configuration and checking if this can be detected... 00:06:23.658 15:02:10 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.658 15:02:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.228 15:02:10 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.228 15:02:10 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:24.228 15:02:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.228 + '[' 2 -ne 2 ']' 00:06:24.228 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:24.228 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:24.228 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.228 +++ basename /dev/fd/62 00:06:24.228 ++ mktemp /tmp/62.XXX 00:06:24.228 + tmp_file_1=/tmp/62.bsC 00:06:24.228 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.228 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.228 + tmp_file_2=/tmp/spdk_tgt_config.json.gxz 00:06:24.228 + ret=0 00:06:24.228 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:25.168 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:25.169 + diff -u /tmp/62.bsC /tmp/spdk_tgt_config.json.gxz 00:06:25.169 + ret=1 00:06:25.169 + echo '=== Start of file: /tmp/62.bsC ===' 00:06:25.169 + cat /tmp/62.bsC 00:06:25.169 + echo '=== End of file: /tmp/62.bsC ===' 00:06:25.169 + echo '' 00:06:25.169 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gxz ===' 00:06:25.169 + cat /tmp/spdk_tgt_config.json.gxz 00:06:25.169 + echo '=== End of file: /tmp/spdk_tgt_config.json.gxz ===' 00:06:25.169 + echo '' 00:06:25.169 + rm /tmp/62.bsC /tmp/spdk_tgt_config.json.gxz 00:06:25.169 + exit 1 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:25.169 INFO: configuration change detected. 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@324 -- # [[ -n 3042974 ]] 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 15:02:11 json_config -- json_config/json_config.sh@330 -- # killprocess 3042974 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@950 -- # '[' -z 3042974 ']' 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@954 -- # kill -0 3042974 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@955 -- # uname 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.169 15:02:11 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3042974 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3042974' 00:06:25.169 killing process with pid 3042974 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@969 -- # kill 3042974 00:06:25.169 15:02:11 json_config -- common/autotest_common.sh@974 -- # wait 3042974 00:06:27.114 15:02:13 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.114 15:02:13 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:27.114 15:02:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.114 15:02:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.114 15:02:13 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:27.114 15:02:13 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:27.114 INFO: Success 00:06:27.114 00:06:27.114 real 0m23.494s 00:06:27.114 user 0m30.634s 00:06:27.114 sys 0m3.986s 00:06:27.114 15:02:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.114 15:02:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.114 ************************************ 00:06:27.114 END TEST json_config 00:06:27.114 ************************************ 00:06:27.114 15:02:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:27.114 15:02:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.114 15:02:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.114 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:27.114 ************************************ 00:06:27.114 START TEST json_config_extra_key 00:06:27.114 ************************************ 00:06:27.114 15:02:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:27.114 15:02:13 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:27.114 15:02:13 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:06:27.114 15:02:13 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:27.114 15:02:13 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.115 15:02:13 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:27.115 15:02:13 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.115 15:02:13 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:27.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.115 --rc genhtml_branch_coverage=1 00:06:27.115 --rc genhtml_function_coverage=1 00:06:27.115 --rc genhtml_legend=1 00:06:27.115 --rc geninfo_all_blocks=1 00:06:27.115 --rc geninfo_unexecuted_blocks=1 00:06:27.115 00:06:27.115 ' 00:06:27.115 15:02:13 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:27.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.115 --rc genhtml_branch_coverage=1 00:06:27.115 --rc genhtml_function_coverage=1 00:06:27.115 --rc genhtml_legend=1 00:06:27.115 --rc geninfo_all_blocks=1 00:06:27.115 --rc geninfo_unexecuted_blocks=1 00:06:27.115 00:06:27.115 ' 00:06:27.115 15:02:13 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:27.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.115 --rc genhtml_branch_coverage=1 00:06:27.115 --rc genhtml_function_coverage=1 00:06:27.115 --rc genhtml_legend=1 00:06:27.115 --rc geninfo_all_blocks=1 00:06:27.115 --rc geninfo_unexecuted_blocks=1 00:06:27.115 00:06:27.115 ' 00:06:27.115 15:02:13 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:27.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.115 --rc genhtml_branch_coverage=1 00:06:27.115 --rc genhtml_function_coverage=1 00:06:27.115 --rc genhtml_legend=1 00:06:27.115 --rc geninfo_all_blocks=1 00:06:27.115 --rc geninfo_unexecuted_blocks=1 00:06:27.115 00:06:27.115 ' 00:06:27.115 15:02:13 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.115 15:02:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.115 15:02:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.115 15:02:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.115 15:02:13 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.115 15:02:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:27.115 15:02:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.115 15:02:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.115 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:27.115 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:27.115 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:27.115 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:27.115 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:27.375 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:27.375 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:27.375 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:27.375 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:27.376 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:27.376 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:27.376 INFO: launching applications... 
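json_config_extra_key.sh keeps its per-application bookkeeping in bash associative arrays keyed by app name ("target" here), and json_config/common.sh turns those entries into the spdk_tgt command line seen below. A minimal sketch of that structure; the array contents are copied from the trace, while the backgrounded launch and PID capture are a simplified reading of what common.sh does, with $SPDK_DIR standing in for the workspace path:

    # Per-app bookkeeping as declared in json_config_extra_key.sh (sketch).
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$SPDK_DIR/test/json_config/extra_key.json")

    app=target
    "$SPDK_DIR/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!                     # becomes the 3044110 seen in the trace
    echo 'Waiting for target to run...'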
00:06:27.376 15:02:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3044110 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:27.376 Waiting for target to run... 00:06:27.376 15:02:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3044110 /var/tmp/spdk_tgt.sock 00:06:27.376 15:02:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3044110 ']' 00:06:27.376 15:02:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.376 15:02:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.376 15:02:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:27.376 15:02:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.376 15:02:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.376 [2024-10-28 15:02:14.053974] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:06:27.376 [2024-10-28 15:02:14.054099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044110 ] 00:06:27.946 [2024-10-28 15:02:14.730489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.206 [2024-10-28 15:02:14.830847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.775 15:02:15 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.775 15:02:15 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:28.775 00:06:28.775 15:02:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:28.775 INFO: shutting down applications... 
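Before the shutdown starts, waitforlisten has blocked until the freshly launched target answered on its UNIX domain RPC socket; the "Waiting for process to start up and listen on UNIX domain socket ..." line above is its progress message. Roughly, the wait amounts to polling the socket while the PID stays alive, along these lines (a simplified illustration only, not the actual autotest_common.sh implementation; the retry count and helper name are arbitrary):

    # Simplified stand-in for waitforlisten <pid> <rpc_socket>.
    wait_for_rpc_socket() {
        local pid=$1 sock=$2
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died while starting up
            if [[ -S "$sock" ]] && \
               "$SPDK_DIR/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
                return 0                               # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1
    }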
00:06:28.775 15:02:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3044110 ]] 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3044110 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3044110 00:06:28.775 15:02:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.343 15:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.343 15:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.343 15:02:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3044110 00:06:29.343 15:02:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.914 15:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.914 15:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.914 15:02:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3044110 00:06:29.914 15:02:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.914 15:02:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:29.914 15:02:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.914 15:02:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.914 SPDK target shutdown done 00:06:29.914 15:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:29.914 Success 00:06:29.914 00:06:29.914 real 0m2.761s 00:06:29.914 user 0m2.516s 00:06:29.914 sys 0m0.849s 00:06:29.914 15:02:16 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.915 15:02:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.915 ************************************ 00:06:29.915 END TEST json_config_extra_key 00:06:29.915 ************************************ 00:06:29.915 15:02:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.915 15:02:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.915 15:02:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.915 15:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:29.915 ************************************ 00:06:29.915 START TEST alias_rpc 00:06:29.915 ************************************ 00:06:29.915 15:02:16 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.915 * Looking for test storage... 
00:06:29.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:29.915 15:02:16 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:29.915 15:02:16 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:06:29.915 15:02:16 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.176 15:02:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:30.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.176 --rc genhtml_branch_coverage=1 00:06:30.176 --rc genhtml_function_coverage=1 00:06:30.176 --rc genhtml_legend=1 00:06:30.176 --rc geninfo_all_blocks=1 00:06:30.176 --rc geninfo_unexecuted_blocks=1 00:06:30.176 00:06:30.176 ' 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:30.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.176 --rc genhtml_branch_coverage=1 00:06:30.176 --rc genhtml_function_coverage=1 00:06:30.176 --rc genhtml_legend=1 00:06:30.176 --rc geninfo_all_blocks=1 00:06:30.176 --rc geninfo_unexecuted_blocks=1 00:06:30.176 00:06:30.176 ' 00:06:30.176 15:02:16 
alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:30.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.176 --rc genhtml_branch_coverage=1 00:06:30.176 --rc genhtml_function_coverage=1 00:06:30.176 --rc genhtml_legend=1 00:06:30.176 --rc geninfo_all_blocks=1 00:06:30.176 --rc geninfo_unexecuted_blocks=1 00:06:30.176 00:06:30.176 ' 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:30.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.176 --rc genhtml_branch_coverage=1 00:06:30.176 --rc genhtml_function_coverage=1 00:06:30.176 --rc genhtml_legend=1 00:06:30.176 --rc geninfo_all_blocks=1 00:06:30.176 --rc geninfo_unexecuted_blocks=1 00:06:30.176 00:06:30.176 ' 00:06:30.176 15:02:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.176 15:02:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3044492 00:06:30.176 15:02:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:30.176 15:02:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3044492 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3044492 ']' 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.176 15:02:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.437 [2024-10-28 15:02:17.048053] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:06:30.437 [2024-10-28 15:02:17.048237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044492 ] 00:06:30.437 [2024-10-28 15:02:17.215577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.698 [2024-10-28 15:02:17.336212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.985 15:02:17 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.985 15:02:17 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.985 15:02:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:31.924 15:02:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3044492 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3044492 ']' 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3044492 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3044492 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3044492' 00:06:31.924 killing process with pid 3044492 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@969 -- # kill 3044492 00:06:31.924 15:02:18 alias_rpc -- common/autotest_common.sh@974 -- # wait 3044492 00:06:32.490 00:06:32.490 real 0m2.613s 00:06:32.490 user 0m3.122s 00:06:32.490 sys 0m0.841s 00:06:32.490 15:02:19 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.490 15:02:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.490 ************************************ 00:06:32.490 END TEST alias_rpc 00:06:32.490 ************************************ 00:06:32.490 15:02:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:32.490 15:02:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.490 15:02:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.490 15:02:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.490 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:06:32.490 ************************************ 00:06:32.490 START TEST spdkcli_tcp 00:06:32.490 ************************************ 00:06:32.490 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.750 * Looking for test storage... 
00:06:32.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:32.751 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:32.751 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:06:32.751 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:32.751 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.751 15:02:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.012 15:02:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.012 --rc genhtml_branch_coverage=1 00:06:33.012 --rc genhtml_function_coverage=1 00:06:33.012 --rc genhtml_legend=1 00:06:33.012 --rc geninfo_all_blocks=1 00:06:33.012 --rc geninfo_unexecuted_blocks=1 00:06:33.012 00:06:33.012 ' 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.012 --rc genhtml_branch_coverage=1 00:06:33.012 --rc genhtml_function_coverage=1 00:06:33.012 --rc genhtml_legend=1 00:06:33.012 --rc geninfo_all_blocks=1 00:06:33.012 --rc 
geninfo_unexecuted_blocks=1 00:06:33.012 00:06:33.012 ' 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.012 --rc genhtml_branch_coverage=1 00:06:33.012 --rc genhtml_function_coverage=1 00:06:33.012 --rc genhtml_legend=1 00:06:33.012 --rc geninfo_all_blocks=1 00:06:33.012 --rc geninfo_unexecuted_blocks=1 00:06:33.012 00:06:33.012 ' 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.012 --rc genhtml_branch_coverage=1 00:06:33.012 --rc genhtml_function_coverage=1 00:06:33.012 --rc genhtml_legend=1 00:06:33.012 --rc geninfo_all_blocks=1 00:06:33.012 --rc geninfo_unexecuted_blocks=1 00:06:33.012 00:06:33.012 ' 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3044823 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:33.012 15:02:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3044823 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3044823 ']' 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.012 15:02:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.012 [2024-10-28 15:02:19.693454] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:06:33.013 [2024-10-28 15:02:19.693571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044823 ] 00:06:33.013 [2024-10-28 15:02:19.821749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.274 [2024-10-28 15:02:19.936001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.274 [2024-10-28 15:02:19.936006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.534 15:02:20 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.534 15:02:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:33.534 15:02:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3044948 00:06:33.534 15:02:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:33.534 15:02:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:34.104 [ 00:06:34.104 "bdev_malloc_delete", 00:06:34.104 "bdev_malloc_create", 00:06:34.104 "bdev_null_resize", 00:06:34.104 "bdev_null_delete", 00:06:34.104 "bdev_null_create", 00:06:34.104 "bdev_nvme_cuse_unregister", 00:06:34.104 "bdev_nvme_cuse_register", 00:06:34.104 "bdev_opal_new_user", 00:06:34.104 "bdev_opal_set_lock_state", 00:06:34.104 "bdev_opal_delete", 00:06:34.104 "bdev_opal_get_info", 00:06:34.104 "bdev_opal_create", 00:06:34.104 "bdev_nvme_opal_revert", 00:06:34.104 "bdev_nvme_opal_init", 00:06:34.104 "bdev_nvme_send_cmd", 00:06:34.104 "bdev_nvme_set_keys", 00:06:34.104 "bdev_nvme_get_path_iostat", 00:06:34.104 "bdev_nvme_get_mdns_discovery_info", 00:06:34.104 "bdev_nvme_stop_mdns_discovery", 00:06:34.104 "bdev_nvme_start_mdns_discovery", 00:06:34.104 "bdev_nvme_set_multipath_policy", 00:06:34.104 "bdev_nvme_set_preferred_path", 00:06:34.104 "bdev_nvme_get_io_paths", 00:06:34.104 "bdev_nvme_remove_error_injection", 00:06:34.104 "bdev_nvme_add_error_injection", 00:06:34.104 "bdev_nvme_get_discovery_info", 00:06:34.104 "bdev_nvme_stop_discovery", 00:06:34.104 "bdev_nvme_start_discovery", 00:06:34.104 "bdev_nvme_get_controller_health_info", 00:06:34.104 "bdev_nvme_disable_controller", 00:06:34.104 "bdev_nvme_enable_controller", 00:06:34.104 "bdev_nvme_reset_controller", 00:06:34.104 "bdev_nvme_get_transport_statistics", 00:06:34.104 "bdev_nvme_apply_firmware", 00:06:34.104 "bdev_nvme_detach_controller", 00:06:34.104 "bdev_nvme_get_controllers", 00:06:34.104 "bdev_nvme_attach_controller", 00:06:34.104 "bdev_nvme_set_hotplug", 00:06:34.104 "bdev_nvme_set_options", 00:06:34.104 "bdev_passthru_delete", 00:06:34.104 "bdev_passthru_create", 00:06:34.104 "bdev_lvol_set_parent_bdev", 00:06:34.104 "bdev_lvol_set_parent", 00:06:34.104 "bdev_lvol_check_shallow_copy", 00:06:34.104 "bdev_lvol_start_shallow_copy", 00:06:34.104 "bdev_lvol_grow_lvstore", 00:06:34.104 "bdev_lvol_get_lvols", 00:06:34.104 "bdev_lvol_get_lvstores", 00:06:34.104 "bdev_lvol_delete", 00:06:34.104 "bdev_lvol_set_read_only", 00:06:34.104 "bdev_lvol_resize", 00:06:34.104 "bdev_lvol_decouple_parent", 00:06:34.104 "bdev_lvol_inflate", 00:06:34.104 "bdev_lvol_rename", 00:06:34.104 "bdev_lvol_clone_bdev", 00:06:34.104 "bdev_lvol_clone", 00:06:34.104 "bdev_lvol_snapshot", 00:06:34.104 "bdev_lvol_create", 00:06:34.104 "bdev_lvol_delete_lvstore", 00:06:34.104 "bdev_lvol_rename_lvstore", 
00:06:34.104 "bdev_lvol_create_lvstore", 00:06:34.104 "bdev_raid_set_options", 00:06:34.104 "bdev_raid_remove_base_bdev", 00:06:34.104 "bdev_raid_add_base_bdev", 00:06:34.104 "bdev_raid_delete", 00:06:34.104 "bdev_raid_create", 00:06:34.104 "bdev_raid_get_bdevs", 00:06:34.104 "bdev_error_inject_error", 00:06:34.104 "bdev_error_delete", 00:06:34.104 "bdev_error_create", 00:06:34.104 "bdev_split_delete", 00:06:34.104 "bdev_split_create", 00:06:34.104 "bdev_delay_delete", 00:06:34.104 "bdev_delay_create", 00:06:34.104 "bdev_delay_update_latency", 00:06:34.104 "bdev_zone_block_delete", 00:06:34.104 "bdev_zone_block_create", 00:06:34.104 "blobfs_create", 00:06:34.104 "blobfs_detect", 00:06:34.104 "blobfs_set_cache_size", 00:06:34.104 "bdev_aio_delete", 00:06:34.104 "bdev_aio_rescan", 00:06:34.104 "bdev_aio_create", 00:06:34.104 "bdev_ftl_set_property", 00:06:34.104 "bdev_ftl_get_properties", 00:06:34.104 "bdev_ftl_get_stats", 00:06:34.104 "bdev_ftl_unmap", 00:06:34.104 "bdev_ftl_unload", 00:06:34.104 "bdev_ftl_delete", 00:06:34.104 "bdev_ftl_load", 00:06:34.104 "bdev_ftl_create", 00:06:34.104 "bdev_virtio_attach_controller", 00:06:34.104 "bdev_virtio_scsi_get_devices", 00:06:34.104 "bdev_virtio_detach_controller", 00:06:34.104 "bdev_virtio_blk_set_hotplug", 00:06:34.104 "bdev_iscsi_delete", 00:06:34.104 "bdev_iscsi_create", 00:06:34.104 "bdev_iscsi_set_options", 00:06:34.104 "accel_error_inject_error", 00:06:34.104 "ioat_scan_accel_module", 00:06:34.104 "dsa_scan_accel_module", 00:06:34.104 "iaa_scan_accel_module", 00:06:34.104 "vfu_virtio_create_fs_endpoint", 00:06:34.104 "vfu_virtio_create_scsi_endpoint", 00:06:34.104 "vfu_virtio_scsi_remove_target", 00:06:34.104 "vfu_virtio_scsi_add_target", 00:06:34.104 "vfu_virtio_create_blk_endpoint", 00:06:34.104 "vfu_virtio_delete_endpoint", 00:06:34.104 "keyring_file_remove_key", 00:06:34.104 "keyring_file_add_key", 00:06:34.104 "keyring_linux_set_options", 00:06:34.104 "fsdev_aio_delete", 00:06:34.104 "fsdev_aio_create", 00:06:34.104 "iscsi_get_histogram", 00:06:34.104 "iscsi_enable_histogram", 00:06:34.104 "iscsi_set_options", 00:06:34.104 "iscsi_get_auth_groups", 00:06:34.104 "iscsi_auth_group_remove_secret", 00:06:34.104 "iscsi_auth_group_add_secret", 00:06:34.104 "iscsi_delete_auth_group", 00:06:34.104 "iscsi_create_auth_group", 00:06:34.104 "iscsi_set_discovery_auth", 00:06:34.104 "iscsi_get_options", 00:06:34.104 "iscsi_target_node_request_logout", 00:06:34.104 "iscsi_target_node_set_redirect", 00:06:34.104 "iscsi_target_node_set_auth", 00:06:34.104 "iscsi_target_node_add_lun", 00:06:34.104 "iscsi_get_stats", 00:06:34.104 "iscsi_get_connections", 00:06:34.104 "iscsi_portal_group_set_auth", 00:06:34.104 "iscsi_start_portal_group", 00:06:34.104 "iscsi_delete_portal_group", 00:06:34.104 "iscsi_create_portal_group", 00:06:34.104 "iscsi_get_portal_groups", 00:06:34.104 "iscsi_delete_target_node", 00:06:34.104 "iscsi_target_node_remove_pg_ig_maps", 00:06:34.104 "iscsi_target_node_add_pg_ig_maps", 00:06:34.104 "iscsi_create_target_node", 00:06:34.104 "iscsi_get_target_nodes", 00:06:34.104 "iscsi_delete_initiator_group", 00:06:34.104 "iscsi_initiator_group_remove_initiators", 00:06:34.104 "iscsi_initiator_group_add_initiators", 00:06:34.104 "iscsi_create_initiator_group", 00:06:34.104 "iscsi_get_initiator_groups", 00:06:34.104 "nvmf_set_crdt", 00:06:34.104 "nvmf_set_config", 00:06:34.104 "nvmf_set_max_subsystems", 00:06:34.104 "nvmf_stop_mdns_prr", 00:06:34.104 "nvmf_publish_mdns_prr", 00:06:34.104 "nvmf_subsystem_get_listeners", 00:06:34.104 
"nvmf_subsystem_get_qpairs", 00:06:34.104 "nvmf_subsystem_get_controllers", 00:06:34.104 "nvmf_get_stats", 00:06:34.104 "nvmf_get_transports", 00:06:34.104 "nvmf_create_transport", 00:06:34.104 "nvmf_get_targets", 00:06:34.104 "nvmf_delete_target", 00:06:34.104 "nvmf_create_target", 00:06:34.104 "nvmf_subsystem_allow_any_host", 00:06:34.104 "nvmf_subsystem_set_keys", 00:06:34.104 "nvmf_subsystem_remove_host", 00:06:34.104 "nvmf_subsystem_add_host", 00:06:34.104 "nvmf_ns_remove_host", 00:06:34.104 "nvmf_ns_add_host", 00:06:34.104 "nvmf_subsystem_remove_ns", 00:06:34.104 "nvmf_subsystem_set_ns_ana_group", 00:06:34.104 "nvmf_subsystem_add_ns", 00:06:34.104 "nvmf_subsystem_listener_set_ana_state", 00:06:34.104 "nvmf_discovery_get_referrals", 00:06:34.104 "nvmf_discovery_remove_referral", 00:06:34.104 "nvmf_discovery_add_referral", 00:06:34.104 "nvmf_subsystem_remove_listener", 00:06:34.104 "nvmf_subsystem_add_listener", 00:06:34.104 "nvmf_delete_subsystem", 00:06:34.104 "nvmf_create_subsystem", 00:06:34.104 "nvmf_get_subsystems", 00:06:34.104 "env_dpdk_get_mem_stats", 00:06:34.104 "nbd_get_disks", 00:06:34.104 "nbd_stop_disk", 00:06:34.104 "nbd_start_disk", 00:06:34.104 "ublk_recover_disk", 00:06:34.104 "ublk_get_disks", 00:06:34.104 "ublk_stop_disk", 00:06:34.104 "ublk_start_disk", 00:06:34.104 "ublk_destroy_target", 00:06:34.104 "ublk_create_target", 00:06:34.104 "virtio_blk_create_transport", 00:06:34.104 "virtio_blk_get_transports", 00:06:34.104 "vhost_controller_set_coalescing", 00:06:34.104 "vhost_get_controllers", 00:06:34.104 "vhost_delete_controller", 00:06:34.104 "vhost_create_blk_controller", 00:06:34.104 "vhost_scsi_controller_remove_target", 00:06:34.104 "vhost_scsi_controller_add_target", 00:06:34.104 "vhost_start_scsi_controller", 00:06:34.104 "vhost_create_scsi_controller", 00:06:34.104 "thread_set_cpumask", 00:06:34.104 "scheduler_set_options", 00:06:34.104 "framework_get_governor", 00:06:34.104 "framework_get_scheduler", 00:06:34.104 "framework_set_scheduler", 00:06:34.104 "framework_get_reactors", 00:06:34.104 "thread_get_io_channels", 00:06:34.104 "thread_get_pollers", 00:06:34.104 "thread_get_stats", 00:06:34.104 "framework_monitor_context_switch", 00:06:34.104 "spdk_kill_instance", 00:06:34.104 "log_enable_timestamps", 00:06:34.104 "log_get_flags", 00:06:34.104 "log_clear_flag", 00:06:34.104 "log_set_flag", 00:06:34.104 "log_get_level", 00:06:34.104 "log_set_level", 00:06:34.104 "log_get_print_level", 00:06:34.104 "log_set_print_level", 00:06:34.104 "framework_enable_cpumask_locks", 00:06:34.104 "framework_disable_cpumask_locks", 00:06:34.104 "framework_wait_init", 00:06:34.104 "framework_start_init", 00:06:34.104 "scsi_get_devices", 00:06:34.104 "bdev_get_histogram", 00:06:34.104 "bdev_enable_histogram", 00:06:34.104 "bdev_set_qos_limit", 00:06:34.104 "bdev_set_qd_sampling_period", 00:06:34.104 "bdev_get_bdevs", 00:06:34.104 "bdev_reset_iostat", 00:06:34.104 "bdev_get_iostat", 00:06:34.104 "bdev_examine", 00:06:34.104 "bdev_wait_for_examine", 00:06:34.104 "bdev_set_options", 00:06:34.104 "accel_get_stats", 00:06:34.104 "accel_set_options", 00:06:34.104 "accel_set_driver", 00:06:34.104 "accel_crypto_key_destroy", 00:06:34.104 "accel_crypto_keys_get", 00:06:34.104 "accel_crypto_key_create", 00:06:34.104 "accel_assign_opc", 00:06:34.104 "accel_get_module_info", 00:06:34.104 "accel_get_opc_assignments", 00:06:34.104 "vmd_rescan", 00:06:34.105 "vmd_remove_device", 00:06:34.105 "vmd_enable", 00:06:34.105 "sock_get_default_impl", 00:06:34.105 "sock_set_default_impl", 
00:06:34.105 "sock_impl_set_options", 00:06:34.105 "sock_impl_get_options", 00:06:34.105 "iobuf_get_stats", 00:06:34.105 "iobuf_set_options", 00:06:34.105 "keyring_get_keys", 00:06:34.105 "vfu_tgt_set_base_path", 00:06:34.105 "framework_get_pci_devices", 00:06:34.105 "framework_get_config", 00:06:34.105 "framework_get_subsystems", 00:06:34.105 "fsdev_set_opts", 00:06:34.105 "fsdev_get_opts", 00:06:34.105 "trace_get_info", 00:06:34.105 "trace_get_tpoint_group_mask", 00:06:34.105 "trace_disable_tpoint_group", 00:06:34.105 "trace_enable_tpoint_group", 00:06:34.105 "trace_clear_tpoint_mask", 00:06:34.105 "trace_set_tpoint_mask", 00:06:34.105 "notify_get_notifications", 00:06:34.105 "notify_get_types", 00:06:34.105 "spdk_get_version", 00:06:34.105 "rpc_get_methods" 00:06:34.105 ] 00:06:34.105 15:02:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:34.105 15:02:20 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.105 15:02:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.105 15:02:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:34.105 15:02:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3044823 00:06:34.105 15:02:20 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3044823 ']' 00:06:34.105 15:02:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3044823 00:06:34.105 15:02:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:34.105 15:02:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.105 15:02:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3044823 00:06:34.361 15:02:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.361 15:02:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.361 15:02:21 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3044823' 00:06:34.361 killing process with pid 3044823 00:06:34.361 15:02:21 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3044823 00:06:34.361 15:02:21 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3044823 00:06:34.931 00:06:34.931 real 0m2.299s 00:06:34.931 user 0m4.186s 00:06:34.931 sys 0m0.776s 00:06:34.931 15:02:21 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.931 15:02:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.931 ************************************ 00:06:34.931 END TEST spdkcli_tcp 00:06:34.931 ************************************ 00:06:34.931 15:02:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.931 15:02:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.931 15:02:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.931 15:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:34.931 ************************************ 00:06:34.931 START TEST dpdk_mem_utility 00:06:34.931 ************************************ 00:06:34.931 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.931 * Looking for test storage... 
00:06:34.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:34.931 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:34.931 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:06:34.931 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.191 15:02:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:35.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.191 --rc genhtml_branch_coverage=1 00:06:35.191 --rc genhtml_function_coverage=1 00:06:35.191 --rc genhtml_legend=1 00:06:35.191 --rc geninfo_all_blocks=1 00:06:35.191 --rc geninfo_unexecuted_blocks=1 00:06:35.191 00:06:35.191 ' 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:35.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.191 --rc 
genhtml_branch_coverage=1 00:06:35.191 --rc genhtml_function_coverage=1 00:06:35.191 --rc genhtml_legend=1 00:06:35.191 --rc geninfo_all_blocks=1 00:06:35.191 --rc geninfo_unexecuted_blocks=1 00:06:35.191 00:06:35.191 ' 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:35.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.191 --rc genhtml_branch_coverage=1 00:06:35.191 --rc genhtml_function_coverage=1 00:06:35.191 --rc genhtml_legend=1 00:06:35.191 --rc geninfo_all_blocks=1 00:06:35.191 --rc geninfo_unexecuted_blocks=1 00:06:35.191 00:06:35.191 ' 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:35.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.191 --rc genhtml_branch_coverage=1 00:06:35.191 --rc genhtml_function_coverage=1 00:06:35.191 --rc genhtml_legend=1 00:06:35.191 --rc geninfo_all_blocks=1 00:06:35.191 --rc geninfo_unexecuted_blocks=1 00:06:35.191 00:06:35.191 ' 00:06:35.191 15:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:35.191 15:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3045160 00:06:35.191 15:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.191 15:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3045160 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3045160 ']' 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.191 15:02:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.191 [2024-10-28 15:02:21.981800] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
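
At this point the dpdk_mem_utility test has launched build/bin/spdk_tgt and is sitting in waitforlisten until /var/tmp/spdk.sock answers, after which it issues the env_dpdk_get_mem_stats RPC seen further on. A minimal sketch of that start-and-wait pattern (this is not the autotest waitforlisten helper itself; SPDK_DIR is simply the checkout path this job uses):

    # Sketch only: start the target, poll its RPC socket, then talk to it.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/build/bin/spdk_tgt &
    spdk_pid=$!
    # rpc_get_methods is a cheap RPC; once it succeeds the target is listening.
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $spdk_pid) is ready for RPCs such as env_dpdk_get_mem_stats"
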
00:06:35.191 [2024-10-28 15:02:21.981936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045160 ] 00:06:35.452 [2024-10-28 15:02:22.107191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.452 [2024-10-28 15:02:22.184038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.023 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.023 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:36.023 15:02:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:36.023 15:02:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:36.023 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.023 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.023 { 00:06:36.023 "filename": "/tmp/spdk_mem_dump.txt" 00:06:36.023 } 00:06:36.023 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.024 15:02:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:36.024 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:36.024 1 heaps totaling size 810.000000 MiB 00:06:36.024 size: 810.000000 MiB heap id: 0 00:06:36.024 end heaps---------- 00:06:36.024 9 mempools totaling size 595.772034 MiB 00:06:36.024 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:36.024 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:36.024 size: 92.545471 MiB name: bdev_io_3045160 00:06:36.024 size: 50.003479 MiB name: msgpool_3045160 00:06:36.024 size: 36.509338 MiB name: fsdev_io_3045160 00:06:36.024 size: 21.763794 MiB name: PDU_Pool 00:06:36.024 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:36.024 size: 4.133484 MiB name: evtpool_3045160 00:06:36.024 size: 0.026123 MiB name: Session_Pool 00:06:36.024 end mempools------- 00:06:36.024 6 memzones totaling size 4.142822 MiB 00:06:36.024 size: 1.000366 MiB name: RG_ring_0_3045160 00:06:36.024 size: 1.000366 MiB name: RG_ring_1_3045160 00:06:36.024 size: 1.000366 MiB name: RG_ring_4_3045160 00:06:36.024 size: 1.000366 MiB name: RG_ring_5_3045160 00:06:36.024 size: 0.125366 MiB name: RG_ring_2_3045160 00:06:36.024 size: 0.015991 MiB name: RG_ring_3_3045160 00:06:36.024 end memzones------- 00:06:36.024 15:02:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:36.285 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:36.285 list of free elements. 
size: 10.862488 MiB 00:06:36.285 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:36.285 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:36.285 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:36.285 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:36.285 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:36.285 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:36.285 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:36.285 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:36.285 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:36.285 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:36.285 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:36.285 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:36.285 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:36.285 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:36.285 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:36.285 list of standard malloc elements. size: 199.218628 MiB 00:06:36.285 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:36.285 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:36.285 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:36.285 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:36.285 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:36.285 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:36.285 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:36.285 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:36.285 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:36.285 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:36.285 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:36.285 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:36.285 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:36.285 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:36.285 list of memzone associated elements. size: 599.918884 MiB 00:06:36.285 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:36.285 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:36.285 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:36.285 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:36.285 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:36.285 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3045160_0 00:06:36.285 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:36.285 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3045160_0 00:06:36.285 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:36.285 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3045160_0 00:06:36.285 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:36.285 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:36.285 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:36.285 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:36.285 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:36.285 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3045160_0 00:06:36.285 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:36.285 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3045160 00:06:36.285 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:36.285 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3045160 00:06:36.285 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:36.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:36.285 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:36.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:36.285 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:36.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:36.285 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:36.285 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:36.285 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:36.285 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3045160 00:06:36.285 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:36.285 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3045160 00:06:36.285 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:36.285 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3045160 00:06:36.285 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:36.285 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3045160 00:06:36.285 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:36.285 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3045160 00:06:36.285 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:36.285 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3045160 00:06:36.285 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:36.285 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:36.285 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:36.285 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:36.285 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:36.286 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:36.286 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:36.286 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3045160 00:06:36.286 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:36.286 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3045160 00:06:36.286 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:36.286 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:36.286 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:36.286 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:36.286 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:36.286 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3045160 00:06:36.286 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:36.286 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:36.286 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:36.286 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3045160 00:06:36.286 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:36.286 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3045160 00:06:36.286 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:36.286 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3045160 00:06:36.286 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:36.286 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:36.286 15:02:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:36.286 15:02:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3045160 00:06:36.286 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3045160 ']' 00:06:36.286 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3045160 00:06:36.286 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:36.286 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.286 15:02:22 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3045160 00:06:36.286 15:02:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.286 15:02:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.286 15:02:23 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3045160' 00:06:36.286 killing process with pid 3045160 00:06:36.286 15:02:23 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3045160 00:06:36.286 15:02:23 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3045160 00:06:36.858 00:06:36.858 real 0m1.902s 00:06:36.858 user 0m1.914s 00:06:36.858 sys 0m0.720s 00:06:36.858 15:02:23 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.858 15:02:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.859 ************************************ 00:06:36.859 END TEST dpdk_mem_utility 00:06:36.859 ************************************ 00:06:36.859 15:02:23 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:36.859 15:02:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.859 15:02:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.859 15:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:36.859 ************************************ 00:06:36.859 START TEST event 00:06:36.859 ************************************ 00:06:36.859 15:02:23 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:36.859 * Looking for test storage... 00:06:36.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:36.859 15:02:23 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:36.859 15:02:23 event -- common/autotest_common.sh@1689 -- # lcov --version 00:06:36.859 15:02:23 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:37.116 15:02:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.116 15:02:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.116 15:02:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.116 15:02:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.116 15:02:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.116 15:02:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.116 15:02:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.116 15:02:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.116 15:02:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.116 15:02:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.116 15:02:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.116 15:02:23 event -- scripts/common.sh@344 -- # case "$op" in 00:06:37.116 15:02:23 event -- scripts/common.sh@345 -- # : 1 00:06:37.116 15:02:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.116 15:02:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.116 15:02:23 event -- scripts/common.sh@365 -- # decimal 1 00:06:37.116 15:02:23 event -- scripts/common.sh@353 -- # local d=1 00:06:37.116 15:02:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.116 15:02:23 event -- scripts/common.sh@355 -- # echo 1 00:06:37.116 15:02:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.116 15:02:23 event -- scripts/common.sh@366 -- # decimal 2 00:06:37.116 15:02:23 event -- scripts/common.sh@353 -- # local d=2 00:06:37.116 15:02:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.116 15:02:23 event -- scripts/common.sh@355 -- # echo 2 00:06:37.116 15:02:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.116 15:02:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.116 15:02:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.116 15:02:23 event -- scripts/common.sh@368 -- # return 0 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.116 --rc genhtml_branch_coverage=1 00:06:37.116 --rc genhtml_function_coverage=1 00:06:37.116 --rc genhtml_legend=1 00:06:37.116 --rc geninfo_all_blocks=1 00:06:37.116 --rc geninfo_unexecuted_blocks=1 00:06:37.116 00:06:37.116 ' 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.116 --rc genhtml_branch_coverage=1 00:06:37.116 --rc genhtml_function_coverage=1 00:06:37.116 --rc genhtml_legend=1 00:06:37.116 --rc geninfo_all_blocks=1 00:06:37.116 --rc geninfo_unexecuted_blocks=1 00:06:37.116 00:06:37.116 ' 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.116 --rc genhtml_branch_coverage=1 00:06:37.116 --rc genhtml_function_coverage=1 00:06:37.116 --rc genhtml_legend=1 00:06:37.116 --rc geninfo_all_blocks=1 00:06:37.116 --rc geninfo_unexecuted_blocks=1 00:06:37.116 00:06:37.116 ' 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.116 --rc genhtml_branch_coverage=1 00:06:37.116 --rc genhtml_function_coverage=1 00:06:37.116 --rc genhtml_legend=1 00:06:37.116 --rc geninfo_all_blocks=1 00:06:37.116 --rc geninfo_unexecuted_blocks=1 00:06:37.116 00:06:37.116 ' 00:06:37.116 15:02:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:37.116 15:02:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:37.116 15:02:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:37.116 15:02:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.116 15:02:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.116 ************************************ 00:06:37.116 START TEST event_perf 00:06:37.116 ************************************ 00:06:37.116 15:02:23 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:37.384 Running I/O for 1 seconds...[2024-10-28 15:02:23.988384] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:06:37.384 [2024-10-28 15:02:23.988530] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045481 ] 00:06:37.384 [2024-10-28 15:02:24.155425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.642 [2024-10-28 15:02:24.291277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.642 [2024-10-28 15:02:24.291379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.642 [2024-10-28 15:02:24.291472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.642 [2024-10-28 15:02:24.291475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.572 Running I/O for 1 seconds... 00:06:38.572 lcore 0: 216176 00:06:38.572 lcore 1: 216176 00:06:38.572 lcore 2: 216174 00:06:38.572 lcore 3: 216175 00:06:38.572 done. 00:06:38.572 00:06:38.572 real 0m1.440s 00:06:38.572 user 0m4.281s 00:06:38.572 sys 0m0.152s 00:06:38.572 15:02:25 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.572 15:02:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.572 ************************************ 00:06:38.572 END TEST event_perf 00:06:38.572 ************************************ 00:06:38.573 15:02:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:38.573 15:02:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:38.573 15:02:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.573 15:02:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.833 ************************************ 00:06:38.833 START TEST event_reactor 00:06:38.833 ************************************ 00:06:38.833 15:02:25 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:38.833 [2024-10-28 15:02:25.507342] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
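
The event_perf run above amounts to the direct invocation sketched here: -m 0xF starts a reactor on each of four cores, -t 1 runs the benchmark for one second, and the lcore lines report how many events each reactor processed in that window. The reactor and reactor_perf tests that follow in this log take only the duration flag. (Sketch; assumes the binaries were built in place and hugepages are configured, as on this CI node.)

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Same flags as the run_test line above: 4 reactors, 1 second.
    $SPDK_DIR/test/event/event_perf/event_perf -m 0xF -t 1
    # Single-reactor variants exercised next in this log:
    $SPDK_DIR/test/event/reactor/reactor -t 1
    $SPDK_DIR/test/event/reactor_perf/reactor_perf -t 1
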
00:06:38.833 [2024-10-28 15:02:25.507492] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045653 ] 00:06:38.833 [2024-10-28 15:02:25.670177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.092 [2024-10-28 15:02:25.791775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.473 test_start 00:06:40.473 oneshot 00:06:40.473 tick 100 00:06:40.473 tick 100 00:06:40.473 tick 250 00:06:40.473 tick 100 00:06:40.473 tick 100 00:06:40.473 tick 100 00:06:40.473 tick 250 00:06:40.473 tick 500 00:06:40.473 tick 100 00:06:40.473 tick 100 00:06:40.473 tick 250 00:06:40.473 tick 100 00:06:40.473 tick 100 00:06:40.473 test_end 00:06:40.473 00:06:40.473 real 0m1.429s 00:06:40.473 user 0m1.261s 00:06:40.473 sys 0m0.157s 00:06:40.473 15:02:26 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.473 15:02:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:40.473 ************************************ 00:06:40.473 END TEST event_reactor 00:06:40.473 ************************************ 00:06:40.473 15:02:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.473 15:02:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:40.473 15:02:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.473 15:02:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.473 ************************************ 00:06:40.473 START TEST event_reactor_perf 00:06:40.473 ************************************ 00:06:40.473 15:02:26 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.473 [2024-10-28 15:02:27.003158] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:06:40.473 [2024-10-28 15:02:27.003301] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045921 ] 00:06:40.473 [2024-10-28 15:02:27.168114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.473 [2024-10-28 15:02:27.287503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.854 test_start 00:06:41.854 test_end 00:06:41.854 Performance: 159421 events per second 00:06:41.854 00:06:41.854 real 0m1.429s 00:06:41.854 user 0m1.276s 00:06:41.854 sys 0m0.141s 00:06:41.854 15:02:28 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.854 15:02:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.854 ************************************ 00:06:41.854 END TEST event_reactor_perf 00:06:41.854 ************************************ 00:06:41.854 15:02:28 event -- event/event.sh@49 -- # uname -s 00:06:41.854 15:02:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:41.854 15:02:28 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.854 15:02:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.854 15:02:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.854 15:02:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.854 ************************************ 00:06:41.854 START TEST event_scheduler 00:06:41.854 ************************************ 00:06:41.854 15:02:28 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.854 * Looking for test storage... 
00:06:41.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:41.854 15:02:28 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:41.854 15:02:28 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:06:41.854 15:02:28 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.114 15:02:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:42.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.114 --rc genhtml_branch_coverage=1 00:06:42.114 --rc genhtml_function_coverage=1 00:06:42.114 --rc genhtml_legend=1 00:06:42.114 --rc geninfo_all_blocks=1 00:06:42.114 --rc geninfo_unexecuted_blocks=1 00:06:42.114 00:06:42.114 ' 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:42.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.114 --rc genhtml_branch_coverage=1 00:06:42.114 --rc genhtml_function_coverage=1 00:06:42.114 --rc genhtml_legend=1 00:06:42.114 --rc geninfo_all_blocks=1 00:06:42.114 --rc geninfo_unexecuted_blocks=1 00:06:42.114 00:06:42.114 ' 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:42.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.114 --rc genhtml_branch_coverage=1 00:06:42.114 --rc genhtml_function_coverage=1 00:06:42.114 --rc genhtml_legend=1 00:06:42.114 --rc geninfo_all_blocks=1 00:06:42.114 --rc geninfo_unexecuted_blocks=1 00:06:42.114 00:06:42.114 ' 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:42.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.114 --rc genhtml_branch_coverage=1 00:06:42.114 --rc genhtml_function_coverage=1 00:06:42.114 --rc genhtml_legend=1 00:06:42.114 --rc geninfo_all_blocks=1 00:06:42.114 --rc geninfo_unexecuted_blocks=1 00:06:42.114 00:06:42.114 ' 00:06:42.114 15:02:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:42.114 15:02:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3046115 00:06:42.114 15:02:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:42.114 15:02:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.114 15:02:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3046115 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3046115 ']' 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.114 15:02:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.114 [2024-10-28 15:02:28.821265] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:06:42.114 [2024-10-28 15:02:28.821401] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046115 ] 00:06:42.114 [2024-10-28 15:02:28.916885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.374 [2024-10-28 15:02:28.999699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.374 [2024-10-28 15:02:28.999736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.374 [2024-10-28 15:02:28.999808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.374 [2024-10-28 15:02:28.999812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.375 15:02:29 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.375 15:02:29 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:42.375 15:02:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:42.375 15:02:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.375 15:02:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.375 [2024-10-28 15:02:29.141225] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:42.375 [2024-10-28 15:02:29.141255] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:42.375 [2024-10-28 15:02:29.141275] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:42.375 [2024-10-28 15:02:29.141288] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:42.375 [2024-10-28 15:02:29.141299] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:42.375 15:02:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.375 15:02:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:42.375 15:02:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.375 15:02:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 [2024-10-28 15:02:29.322548] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:42.638 15:02:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:42.638 15:02:29 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.638 15:02:29 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 ************************************ 00:06:42.638 START TEST scheduler_create_thread 00:06:42.638 ************************************ 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 2 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 3 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 4 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 5 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 6 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 7 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 8 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 9 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 10 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.638 15:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 15:02:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.019 00:06:44.019 real 0m1.177s 00:06:44.019 user 0m0.011s 00:06:44.019 sys 0m0.005s 00:06:44.019 15:02:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.019 15:02:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 ************************************ 00:06:44.019 END TEST scheduler_create_thread 00:06:44.019 ************************************ 00:06:44.019 15:02:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:44.019 15:02:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3046115 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3046115 ']' 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3046115 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3046115 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3046115' 00:06:44.019 killing process with pid 3046115 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3046115 00:06:44.019 15:02:30 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3046115 00:06:44.278 [2024-10-28 15:02:31.026166] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
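
The scheduler_create_thread block above exercises the test app's plugin RPCs: scheduler_thread_create takes a thread name, an optional cpumask (-m) and an active percentage (-a), scheduler_thread_set_active changes that percentage for a returned thread id, and scheduler_thread_delete removes the thread. A condensed sketch of the same calls (assumes rpc.py can import scheduler_plugin, i.e. the scheduler test directory is on PYTHONPATH, which is what the rpc_cmd wrapper arranges in this run):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
    tid=$($RPC scheduler_thread_create -n half_active -a 0)       # id 11 in the run above
    $RPC scheduler_thread_set_active "$tid" 50
    tid=$($RPC scheduler_thread_create -n deleted -a 100)         # id 12 in the run above
    $RPC scheduler_thread_delete "$tid"
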
00:06:44.538 00:06:44.538 real 0m2.878s 00:06:44.538 user 0m3.557s 00:06:44.538 sys 0m0.543s 00:06:44.538 15:02:31 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.538 15:02:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.539 ************************************ 00:06:44.539 END TEST event_scheduler 00:06:44.539 ************************************ 00:06:44.539 15:02:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:44.539 15:02:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:44.539 15:02:31 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.539 15:02:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.539 15:02:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.799 ************************************ 00:06:44.799 START TEST app_repeat 00:06:44.799 ************************************ 00:06:44.799 15:02:31 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3046449 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3046449' 00:06:44.799 Process app_repeat pid: 3046449 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:44.799 spdk_app_start Round 0 00:06:44.799 15:02:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3046449 /var/tmp/spdk-nbd.sock 00:06:44.799 15:02:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3046449 ']' 00:06:44.799 15:02:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.799 15:02:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.799 15:02:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.799 15:02:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.799 15:02:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.799 [2024-10-28 15:02:31.458346] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
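
app_repeat_test above starts the app_repeat helper with its RPC server on a dedicated socket (-r /var/tmp/spdk-nbd.sock) so the test can keep driving it with rpc.py across rounds; -m 0x3 and -t 4 are the values event.sh passes in this run. Roughly:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # Every RPC in the rest of the round names that socket explicitly, e.g.:
    #   $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
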
00:06:44.799 [2024-10-28 15:02:31.458429] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046449 ] 00:06:44.799 [2024-10-28 15:02:31.581851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.059 [2024-10-28 15:02:31.702684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.059 [2024-10-28 15:02:31.702713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.319 15:02:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.319 15:02:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:45.319 15:02:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.579 Malloc0 00:06:45.579 15:02:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.150 Malloc1 00:06:46.150 15:02:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.150 15:02:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.724 /dev/nbd0 00:06:46.724 15:02:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.724 15:02:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.724 1+0 records in 00:06:46.724 1+0 records out 00:06:46.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245035 s, 16.7 MB/s 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.724 15:02:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.724 15:02:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.724 15:02:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.724 15:02:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.294 /dev/nbd1 00:06:47.294 15:02:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.294 15:02:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.294 1+0 records in 00:06:47.294 1+0 records out 00:06:47.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355345 s, 11.5 MB/s 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.294 15:02:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:47.294 15:02:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.294 15:02:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.294 
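Each nbd_start_disk above is followed by the waitfornbd readiness check: poll /proc/partitions for the device name, then do one direct-I/O read and make sure a non-empty block came back. A condensed sketch of that check, with the temporary-file path shortened and the retry delay assumed:

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between retries
        done
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # the read must have returned data
    }

Reading a single 4 KiB block with iflag=direct catches devices that already show up in /proc/partitions but are not actually serving I/O yet.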
15:02:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.294 15:02:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.294 15:02:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.862 15:02:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.862 { 00:06:47.862 "nbd_device": "/dev/nbd0", 00:06:47.862 "bdev_name": "Malloc0" 00:06:47.862 }, 00:06:47.862 { 00:06:47.862 "nbd_device": "/dev/nbd1", 00:06:47.863 "bdev_name": "Malloc1" 00:06:47.863 } 00:06:47.863 ]' 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.863 { 00:06:47.863 "nbd_device": "/dev/nbd0", 00:06:47.863 "bdev_name": "Malloc0" 00:06:47.863 }, 00:06:47.863 { 00:06:47.863 "nbd_device": "/dev/nbd1", 00:06:47.863 "bdev_name": "Malloc1" 00:06:47.863 } 00:06:47.863 ]' 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.863 /dev/nbd1' 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.863 /dev/nbd1' 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.863 256+0 records in 00:06:47.863 256+0 records out 00:06:47.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00820752 s, 128 MB/s 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.863 15:02:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:48.123 256+0 records in 00:06:48.123 256+0 records out 00:06:48.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276606 s, 37.9 MB/s 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.123 256+0 records in 00:06:48.123 256+0 records out 00:06:48.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317932 s, 33.0 MB/s 00:06:48.123 15:02:34 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.123 15:02:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.124 15:02:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.385 15:02:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.327 15:02:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.586 15:02:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.586 15:02:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.155 15:02:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.416 [2024-10-28 15:02:37.184825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.685 [2024-10-28 15:02:37.296583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.685 [2024-10-28 15:02:37.296584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.685 [2024-10-28 15:02:37.396497] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.685 [2024-10-28 15:02:37.396666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.372 15:02:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.372 15:02:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:53.372 spdk_app_start Round 1 00:06:53.372 15:02:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3046449 /var/tmp/spdk-nbd.sock 00:06:53.372 15:02:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3046449 ']' 00:06:53.372 15:02:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.372 15:02:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.372 15:02:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
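The wait printed here is the waitforlisten step: between rounds the app is killed with spdk_kill_instance and restarted, so the test polls the RPC socket (up to max_retries=100) until the new instance answers before issuing bdev/nbd RPCs again. A rough sketch of that loop, with the probe RPC and poll interval assumed rather than taken from the trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1             # target died before listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5                                          # assumed poll interval
        done
        return 1
    }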
00:06:53.372 15:02:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.372 15:02:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.372 15:02:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.372 15:02:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:53.372 15:02:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.943 Malloc0 00:06:53.943 15:02:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.513 Malloc1 00:06:54.513 15:02:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.513 15:02:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.773 /dev/nbd0 00:06:54.773 15:02:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.773 15:02:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:54.773 1+0 records in 00:06:54.773 1+0 records out 00:06:54.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252869 s, 16.2 MB/s 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.773 15:02:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:54.773 15:02:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.773 15:02:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.773 15:02:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.711 /dev/nbd1 00:06:55.711 15:02:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.711 15:02:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.711 1+0 records in 00:06:55.711 1+0 records out 00:06:55.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412942 s, 9.9 MB/s 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.711 15:02:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:55.711 15:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.711 15:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.711 15:02:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.711 15:02:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.711 15:02:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.280 15:02:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:56.280 { 00:06:56.280 "nbd_device": "/dev/nbd0", 00:06:56.280 "bdev_name": "Malloc0" 00:06:56.280 }, 00:06:56.280 { 00:06:56.281 "nbd_device": "/dev/nbd1", 00:06:56.281 "bdev_name": "Malloc1" 00:06:56.281 } 00:06:56.281 ]' 00:06:56.281 15:02:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.281 { 00:06:56.281 "nbd_device": "/dev/nbd0", 00:06:56.281 "bdev_name": "Malloc0" 00:06:56.281 }, 00:06:56.281 { 00:06:56.281 "nbd_device": "/dev/nbd1", 00:06:56.281 "bdev_name": "Malloc1" 00:06:56.281 } 00:06:56.281 ]' 00:06:56.281 15:02:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.281 /dev/nbd1' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.281 /dev/nbd1' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.281 256+0 records in 00:06:56.281 256+0 records out 00:06:56.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045857 s, 229 MB/s 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.281 256+0 records in 00:06:56.281 256+0 records out 00:06:56.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033434 s, 31.4 MB/s 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.281 256+0 records in 00:06:56.281 256+0 records out 00:06:56.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297452 s, 35.3 MB/s 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.281 15:02:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.851 15:02:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.422 15:02:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.682 15:02:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.682 15:02:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.682 15:02:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.942 15:02:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.942 15:02:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.202 15:02:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.772 [2024-10-28 15:02:45.341458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.772 [2024-10-28 15:02:45.455155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.772 [2024-10-28 15:02:45.455171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.772 [2024-10-28 15:02:45.556004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.772 [2024-10-28 15:02:45.556139] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.313 15:02:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.313 15:02:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:01.313 spdk_app_start Round 2 00:07:01.313 15:02:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3046449 /var/tmp/spdk-nbd.sock 00:07:01.313 15:02:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3046449 ']' 00:07:01.313 15:02:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.313 15:02:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.313 15:02:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:01.313 15:02:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.313 15:02:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.884 15:02:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.884 15:02:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:01.884 15:02:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.455 Malloc0 00:07:02.455 15:02:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.023 Malloc1 00:07:03.280 15:02:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.280 15:02:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.281 15:02:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.849 /dev/nbd0 00:07:03.849 15:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.849 15:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:03.849 1+0 records in 00:07:03.849 1+0 records out 00:07:03.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333351 s, 12.3 MB/s 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.849 15:02:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:03.849 15:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.849 15:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.849 15:02:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.109 /dev/nbd1 00:07:04.109 15:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.109 15:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.109 1+0 records in 00:07:04.109 1+0 records out 00:07:04.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413473 s, 9.9 MB/s 00:07:04.109 15:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.369 15:02:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:04.369 15:02:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.369 15:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.369 15:02:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:04.369 15:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.369 15:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.369 15:02:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.369 15:02:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.369 15:02:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:04.939 { 00:07:04.939 "nbd_device": "/dev/nbd0", 00:07:04.939 "bdev_name": "Malloc0" 00:07:04.939 }, 00:07:04.939 { 00:07:04.939 "nbd_device": "/dev/nbd1", 00:07:04.939 "bdev_name": "Malloc1" 00:07:04.939 } 00:07:04.939 ]' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.939 { 00:07:04.939 "nbd_device": "/dev/nbd0", 00:07:04.939 "bdev_name": "Malloc0" 00:07:04.939 }, 00:07:04.939 { 00:07:04.939 "nbd_device": "/dev/nbd1", 00:07:04.939 "bdev_name": "Malloc1" 00:07:04.939 } 00:07:04.939 ]' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.939 /dev/nbd1' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.939 /dev/nbd1' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.939 256+0 records in 00:07:04.939 256+0 records out 00:07:04.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00806425 s, 130 MB/s 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.939 256+0 records in 00:07:04.939 256+0 records out 00:07:04.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280005 s, 37.4 MB/s 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.939 256+0 records in 00:07:04.939 256+0 records out 00:07:04.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0423909 s, 24.7 MB/s 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.939 15:02:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.509 15:02:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.078 15:02:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.339 15:02:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.339 15:02:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.339 15:02:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.600 15:02:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.600 15:02:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.170 15:02:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.431 [2024-10-28 15:02:54.122381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.431 [2024-10-28 15:02:54.242566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.431 [2024-10-28 15:02:54.242567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.692 [2024-10-28 15:02:54.350286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.692 [2024-10-28 15:02:54.350416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.267 15:02:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3046449 /var/tmp/spdk-nbd.sock 00:07:10.267 15:02:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3046449 ']' 00:07:10.268 15:02:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.268 15:02:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.268 15:02:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
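Each round above repeats the same data path once both NBD devices are up: write 1 MiB of random data through each device with O_DIRECT, compare the device contents back against the source file, then stop the disks and confirm nbd_get_disks reports none before the instance is killed for the next round. Condensed, with the rpc.py path and scratch file shortened:

    rand=/tmp/nbdrandtest
    dd if=/dev/urandom of=$rand bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$rand of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M $rand $nbd                            # any mismatch fails the round
    done
    rm $rand
    for nbd in /dev/nbd0 /dev/nbd1; do
        rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk $nbd
    done
    count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

Bounding the comparison with cmp -b -n 1M keeps it to exactly what was written, so stale data past the 1 MiB mark cannot mask a failure.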
00:07:10.268 15:02:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.268 15:02:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:10.836 15:02:57 event.app_repeat -- event/event.sh@39 -- # killprocess 3046449 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3046449 ']' 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3046449 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3046449 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3046449' 00:07:10.836 killing process with pid 3046449 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3046449 00:07:10.836 15:02:57 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3046449 00:07:11.096 spdk_app_start is called in Round 0. 00:07:11.097 Shutdown signal received, stop current app iteration 00:07:11.097 Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 reinitialization... 00:07:11.097 spdk_app_start is called in Round 1. 00:07:11.097 Shutdown signal received, stop current app iteration 00:07:11.097 Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 reinitialization... 00:07:11.097 spdk_app_start is called in Round 2. 00:07:11.097 Shutdown signal received, stop current app iteration 00:07:11.097 Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 reinitialization... 00:07:11.097 spdk_app_start is called in Round 3. 
00:07:11.097 Shutdown signal received, stop current app iteration 00:07:11.097 15:02:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:11.097 15:02:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:11.097 00:07:11.097 real 0m26.322s 00:07:11.097 user 1m0.505s 00:07:11.097 sys 0m5.680s 00:07:11.097 15:02:57 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.097 15:02:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.097 ************************************ 00:07:11.097 END TEST app_repeat 00:07:11.097 ************************************ 00:07:11.097 15:02:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:11.097 15:02:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:11.097 15:02:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.097 15:02:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.097 15:02:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.097 ************************************ 00:07:11.097 START TEST cpu_locks 00:07:11.097 ************************************ 00:07:11.097 15:02:57 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:11.097 * Looking for test storage... 00:07:11.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:11.097 15:02:57 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:11.097 15:02:57 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:07:11.097 15:02:57 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.357 15:02:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.357 --rc genhtml_branch_coverage=1 00:07:11.357 --rc genhtml_function_coverage=1 00:07:11.357 --rc genhtml_legend=1 00:07:11.357 --rc geninfo_all_blocks=1 00:07:11.357 --rc geninfo_unexecuted_blocks=1 00:07:11.357 00:07:11.357 ' 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.357 --rc genhtml_branch_coverage=1 00:07:11.357 --rc genhtml_function_coverage=1 00:07:11.357 --rc genhtml_legend=1 00:07:11.357 --rc geninfo_all_blocks=1 00:07:11.357 --rc geninfo_unexecuted_blocks=1 00:07:11.357 00:07:11.357 ' 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.357 --rc genhtml_branch_coverage=1 00:07:11.357 --rc genhtml_function_coverage=1 00:07:11.357 --rc genhtml_legend=1 00:07:11.357 --rc geninfo_all_blocks=1 00:07:11.357 --rc geninfo_unexecuted_blocks=1 00:07:11.357 00:07:11.357 ' 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:11.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.357 --rc genhtml_branch_coverage=1 00:07:11.357 --rc genhtml_function_coverage=1 00:07:11.357 --rc genhtml_legend=1 00:07:11.357 --rc geninfo_all_blocks=1 00:07:11.357 --rc geninfo_unexecuted_blocks=1 00:07:11.357 00:07:11.357 ' 00:07:11.357 15:02:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:11.357 15:02:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:11.357 15:02:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:11.357 15:02:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.357 15:02:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.357 ************************************ 
00:07:11.357 START TEST default_locks 00:07:11.357 ************************************ 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3049725 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3049725 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3049725 ']' 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.357 15:02:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.618 [2024-10-28 15:02:58.269001] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:11.618 [2024-10-28 15:02:58.269194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049725 ] 00:07:11.618 [2024-10-28 15:02:58.435467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.880 [2024-10-28 15:02:58.559934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.449 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.449 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:12.449 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3049725 00:07:12.449 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3049725 00:07:12.449 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.708 lslocks: write error 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3049725 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3049725 ']' 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3049725 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3049725 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 3049725' 00:07:12.708 killing process with pid 3049725 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3049725 00:07:12.708 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3049725 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3049725 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3049725 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3049725 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3049725 ']' 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3049725) - No such process 00:07:13.277 ERROR: process (pid: 3049725) is no longer running 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:13.277 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:13.278 15:02:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:13.278 00:07:13.278 real 0m1.852s 00:07:13.278 user 0m1.934s 00:07:13.278 sys 0m0.846s 00:07:13.278 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.278 15:02:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.278 ************************************ 00:07:13.278 END TEST default_locks 00:07:13.278 ************************************ 00:07:13.278 15:03:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:13.278 15:03:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.278 15:03:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.278 15:03:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.278 ************************************ 00:07:13.278 START TEST default_locks_via_rpc 00:07:13.278 ************************************ 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3050027 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3050027 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3050027 ']' 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
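The default_locks pass traced above boils down to one check: does the spdk_tgt pid hold an advisory lock on an spdk_cpu_lock file. Below is a minimal stand-alone sketch of that pattern; the SPDK_DIR path is a placeholder taken from this run's workspace, not part of the harness, and the stray "lslocks: write error" lines in the trace are most likely just lslocks hitting a closed pipe once grep -q exits on its first match.

# sketch only: assumes a built SPDK tree at $SPDK_DIR (placeholder)
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &        # claim core 0, default socket /var/tmp/spdk.sock
pid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # stand-in for the harness' waitforlisten

# same test as locks_exist in cpu_locks.sh: the pid must hold an spdk_cpu_lock file lock
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "core lock held by pid $pid"
fi
kill "$pid"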
00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.278 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.278 [2024-10-28 15:03:00.113678] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:13.278 [2024-10-28 15:03:00.113795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050027 ] 00:07:13.538 [2024-10-28 15:03:00.229524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.538 [2024-10-28 15:03:00.338144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3050027 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3050027 00:07:14.109 15:03:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3050027 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3050027 ']' 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3050027 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3050027 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.676 
15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3050027' 00:07:14.676 killing process with pid 3050027 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3050027 00:07:14.676 15:03:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3050027 00:07:15.243 00:07:15.243 real 0m1.996s 00:07:15.243 user 0m1.908s 00:07:15.243 sys 0m0.900s 00:07:15.243 15:03:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.243 15:03:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.243 ************************************ 00:07:15.243 END TEST default_locks_via_rpc 00:07:15.243 ************************************ 00:07:15.243 15:03:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:15.243 15:03:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.243 15:03:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.243 15:03:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.504 ************************************ 00:07:15.504 START TEST non_locking_app_on_locked_coremask 00:07:15.504 ************************************ 00:07:15.504 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:15.504 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3050392 00:07:15.504 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.504 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3050392 /var/tmp/spdk.sock 00:07:15.504 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3050392 ']' 00:07:15.505 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.505 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.505 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.505 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.505 15:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.505 [2024-10-28 15:03:02.244005] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
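default_locks_via_rpc, which finishes just above, drives the same lock from the RPC side instead of the command line. A rough equivalent using SPDK's rpc.py follows; the scripts/rpc.py location is assumed from the usual SPDK tree layout, while the two method names are the ones traced above.

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # placeholder path

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
pid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

"$SPDK_DIR/scripts/rpc.py" framework_disable_cpumask_locks    # release the core 0 lock at runtime
"$SPDK_DIR/scripts/rpc.py" framework_enable_cpumask_locks     # take it again
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired by pid $pid"
kill "$pid"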
00:07:15.505 [2024-10-28 15:03:02.244188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050392 ] 00:07:15.766 [2024-10-28 15:03:02.414978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.766 [2024-10-28 15:03:02.540865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3050546 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3050546 /var/tmp/spdk2.sock 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3050546 ']' 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.336 15:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.336 [2024-10-28 15:03:03.129671] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:16.336 [2024-10-28 15:03:03.129848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050546 ] 00:07:16.596 [2024-10-28 15:03:03.385439] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
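The two spdk_tgt command lines traced above are the whole of non_locking_app_on_locked_coremask: the first instance claims core 0, and a second instance on the same mask still comes up because it is started with --disable-cpumask-locks and its own RPC socket. Condensed, with the same placeholder path as the earlier sketch:

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # placeholder path

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &                                                   # holds the core 0 lock
pid1=$!
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # shares core 0, takes no lock
pid2=$!
while [ ! -S /var/tmp/spdk.sock ] || [ ! -S /var/tmp/spdk2.sock ]; do sleep 0.2; done

lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "only pid $pid1 owns the core 0 lock"
kill "$pid1" "$pid2"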
00:07:16.596 [2024-10-28 15:03:03.385514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.857 [2024-10-28 15:03:03.631747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.798 15:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.798 15:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:17.798 15:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3050392 00:07:17.798 15:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3050392 00:07:17.798 15:03:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.384 lslocks: write error 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3050392 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3050392 ']' 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3050392 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3050392 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3050392' 00:07:18.384 killing process with pid 3050392 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3050392 00:07:18.384 15:03:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3050392 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3050546 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3050546 ']' 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3050546 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3050546 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3050546' 00:07:19.766 
killing process with pid 3050546 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3050546 00:07:19.766 15:03:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3050546 00:07:20.334 00:07:20.334 real 0m5.063s 00:07:20.334 user 0m5.434s 00:07:20.334 sys 0m1.645s 00:07:20.334 15:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.334 15:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.334 ************************************ 00:07:20.334 END TEST non_locking_app_on_locked_coremask 00:07:20.334 ************************************ 00:07:20.594 15:03:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:20.594 15:03:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.594 15:03:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.594 15:03:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.594 ************************************ 00:07:20.594 START TEST locking_app_on_unlocked_coremask 00:07:20.594 ************************************ 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3050984 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3050984 /var/tmp/spdk.sock 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3050984 ']' 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.595 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.595 [2024-10-28 15:03:07.305536] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:20.595 [2024-10-28 15:03:07.305656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050984 ] 00:07:20.595 [2024-10-28 15:03:07.389712] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.595 [2024-10-28 15:03:07.389758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.855 [2024-10-28 15:03:07.501840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3051159 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3051159 /var/tmp/spdk2.sock 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3051159 ']' 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.426 15:03:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.426 [2024-10-28 15:03:08.112947] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:07:21.426 [2024-10-28 15:03:08.113135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051159 ] 00:07:21.686 [2024-10-28 15:03:08.378852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.946 [2024-10-28 15:03:08.626894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.888 15:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.888 15:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:22.888 15:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3051159 00:07:22.888 15:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3051159 00:07:22.888 15:03:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.455 lslocks: write error 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3050984 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3050984 ']' 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3050984 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3050984 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3050984' 00:07:23.455 killing process with pid 3050984 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3050984 00:07:23.455 15:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3050984 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3051159 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3051159 ']' 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3051159 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3051159 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.834 15:03:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3051159' 00:07:24.834 killing process with pid 3051159 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3051159 00:07:24.834 15:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3051159 00:07:25.401 00:07:25.401 real 0m4.931s 00:07:25.401 user 0m5.313s 00:07:25.401 sys 0m1.715s 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.401 ************************************ 00:07:25.401 END TEST locking_app_on_unlocked_coremask 00:07:25.401 ************************************ 00:07:25.401 15:03:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:25.401 15:03:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.401 15:03:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.401 15:03:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.401 ************************************ 00:07:25.401 START TEST locking_app_on_locked_coremask 00:07:25.401 ************************************ 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3052150 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3052150 /var/tmp/spdk.sock 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3052150 ']' 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.401 15:03:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.661 [2024-10-28 15:03:12.383172] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:07:25.661 [2024-10-28 15:03:12.383352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052150 ] 00:07:25.923 [2024-10-28 15:03:12.547629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.923 [2024-10-28 15:03:12.675679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3052197 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3052197 /var/tmp/spdk2.sock 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3052197 /var/tmp/spdk2.sock 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3052197 /var/tmp/spdk2.sock 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3052197 ']' 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.494 15:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.494 [2024-10-28 15:03:13.242873] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:07:26.494 [2024-10-28 15:03:13.242974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052197 ] 00:07:26.753 [2024-10-28 15:03:13.385769] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3052150 has claimed it. 00:07:26.754 [2024-10-28 15:03:13.385847] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:27.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3052197) - No such process 00:07:27.324 ERROR: process (pid: 3052197) is no longer running 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3052150 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3052150 00:07:27.324 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.894 lslocks: write error 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3052150 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3052150 ']' 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3052150 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3052150 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.894 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.895 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3052150' 00:07:27.895 killing process with pid 3052150 00:07:27.895 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3052150 00:07:27.895 15:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3052150 00:07:28.465 00:07:28.465 real 0m2.990s 00:07:28.465 user 0m3.444s 00:07:28.465 sys 0m1.131s 00:07:28.465 15:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
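The errors above ("Cannot create lock on core 0, probably process 3052150 has claimed it." followed by "Unable to acquire lock on assigned core mask - exiting.") are the point of locking_app_on_locked_coremask rather than a failure: with locks left enabled, a second target on an already-claimed core must refuse to start. A sketch of that negative check, assuming the second instance exits non-zero as the trace suggests:

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # placeholder path

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
pid1=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

# same core, locks still enabled: startup is expected to abort
if "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second instance acquired core 0"
else
    echo "expected: core 0 already claimed by pid $pid1"
fi
kill "$pid1"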
00:07:28.465 15:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.465 ************************************ 00:07:28.465 END TEST locking_app_on_locked_coremask 00:07:28.465 ************************************ 00:07:28.465 15:03:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:28.465 15:03:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.465 15:03:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.465 15:03:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.465 ************************************ 00:07:28.465 START TEST locking_overlapped_coremask 00:07:28.465 ************************************ 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3052491 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3052491 /var/tmp/spdk.sock 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3052491 ']' 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.465 15:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.727 [2024-10-28 15:03:15.359678] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:07:28.727 [2024-10-28 15:03:15.359787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052491 ] 00:07:28.727 [2024-10-28 15:03:15.491259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.987 [2024-10-28 15:03:15.622938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.987 [2024-10-28 15:03:15.623036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.987 [2024-10-28 15:03:15.623046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3052623 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3052623 /var/tmp/spdk2.sock 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3052623 /var/tmp/spdk2.sock 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3052623 /var/tmp/spdk2.sock 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3052623 ']' 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.247 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.506 [2024-10-28 15:03:16.124269] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:07:29.506 [2024-10-28 15:03:16.124382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052623 ] 00:07:29.506 [2024-10-28 15:03:16.314854] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3052491 has claimed it. 00:07:29.506 [2024-10-28 15:03:16.314963] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:30.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3052623) - No such process 00:07:30.074 ERROR: process (pid: 3052623) is no longer running 00:07:30.074 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.074 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:30.074 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:30.074 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3052491 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3052491 ']' 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3052491 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.075 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3052491 00:07:30.334 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.334 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.334 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3052491' 00:07:30.334 killing process with pid 3052491 00:07:30.334 15:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3052491 00:07:30.334 15:03:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3052491 00:07:30.904 00:07:30.904 real 0m2.246s 00:07:30.904 user 0m6.022s 00:07:30.904 sys 0m0.678s 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.904 ************************************ 00:07:30.904 END TEST locking_overlapped_coremask 00:07:30.904 ************************************ 00:07:30.904 15:03:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:30.904 15:03:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.904 15:03:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.904 15:03:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.904 ************************************ 00:07:30.904 START TEST locking_overlapped_coremask_via_rpc 00:07:30.904 ************************************ 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3052788 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3052788 /var/tmp/spdk.sock 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3052788 ']' 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.904 15:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.904 [2024-10-28 15:03:17.741974] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:30.904 [2024-10-28 15:03:17.742159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052788 ] 00:07:31.166 [2024-10-28 15:03:17.912480] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
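locking_overlapped_coremask, wrapped up just above, applies the same rule to partially overlapping masks: the first target takes 0x7 (cores 0 to 2), so a second target asking for 0x1c (cores 2 to 4) collides on core 2, which is exactly the "Cannot create lock on core 2" error in the trace; the lock files it inspects live under /var/tmp, as the check_remaining_locks expansion shows. Sketch with the same placeholder path:

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # placeholder path

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x7 &                      # claims cores 0, 1 and 2
pid1=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
ls /var/tmp/spdk_cpu_lock_*                                  # expect _000 _001 _002

# mask 0x1c covers cores 2, 3 and 4, so core 2 overlaps the first instance
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock \
    || echo "expected: core 2 already locked"
kill "$pid1"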
00:07:31.166 [2024-10-28 15:03:17.912571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.426 [2024-10-28 15:03:18.045806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.426 [2024-10-28 15:03:18.045890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.426 [2024-10-28 15:03:18.045899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3052925 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3052925 /var/tmp/spdk2.sock 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3052925 ']' 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.686 15:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.946 [2024-10-28 15:03:18.572757] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:31.946 [2024-10-28 15:03:18.572869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052925 ] 00:07:31.946 [2024-10-28 15:03:18.769254] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.946 [2024-10-28 15:03:18.769338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.205 [2024-10-28 15:03:18.988739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.205 [2024-10-28 15:03:18.992689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.205 [2024-10-28 15:03:18.992692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.771 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.772 [2024-10-28 15:03:19.603755] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3052788 has claimed it. 
00:07:32.772 request: 00:07:32.772 { 00:07:32.772 "method": "framework_enable_cpumask_locks", 00:07:32.772 "req_id": 1 00:07:32.772 } 00:07:32.772 Got JSON-RPC error response 00:07:32.772 response: 00:07:32.772 { 00:07:32.772 "code": -32603, 00:07:32.772 "message": "Failed to claim CPU core: 2" 00:07:32.772 } 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3052788 /var/tmp/spdk.sock 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3052788 ']' 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.772 15:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3052925 /var/tmp/spdk2.sock 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3052925 ']' 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
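The failure above can be reproduced by hand with the same binaries, flags and RPC names seen in this trace; a rough sketch, with paths abbreviated relative to the spdk checkout:

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # first target: cores 0-2, no locks taken yet
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # second target: cores 2-4, own RPC socket
  scripts/rpc.py framework_enable_cpumask_locks                                 # first target claims /var/tmp/spdk_cpu_lock_000..002
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # fails on shared core 2: -32603 "Failed to claim CPU core: 2"

The second call is expected to fail, which is why the test wraps it in NOT above.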
00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.336 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.593 00:07:33.593 real 0m2.823s 00:07:33.593 user 0m1.774s 00:07:33.593 sys 0m0.247s 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.593 15:03:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.593 ************************************ 00:07:33.593 END TEST locking_overlapped_coremask_via_rpc 00:07:33.593 ************************************ 00:07:33.850 15:03:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:33.850 15:03:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3052788 ]] 00:07:33.850 15:03:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3052788 00:07:33.850 15:03:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3052788 ']' 00:07:33.850 15:03:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3052788 00:07:33.850 15:03:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:33.850 15:03:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.850 15:03:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3052788 00:07:33.851 15:03:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.851 15:03:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.851 15:03:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3052788' 00:07:33.851 killing process with pid 3052788 00:07:33.851 15:03:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3052788 00:07:33.851 15:03:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3052788 00:07:34.108 15:03:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3052925 ]] 00:07:34.108 15:03:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3052925 00:07:34.108 15:03:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3052925 ']' 00:07:34.108 15:03:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3052925 00:07:34.108 15:03:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:34.108 15:03:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:34.108 15:03:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3052925 00:07:34.366 15:03:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:34.366 15:03:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:34.366 15:03:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3052925' 00:07:34.366 killing process with pid 3052925 00:07:34.366 15:03:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3052925 00:07:34.366 15:03:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3052925 00:07:34.624 15:03:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.625 15:03:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:34.625 15:03:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3052788 ]] 00:07:34.625 15:03:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3052788 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3052788 ']' 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3052788 00:07:34.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3052788) - No such process 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3052788 is not found' 00:07:34.625 Process with pid 3052788 is not found 00:07:34.625 15:03:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3052925 ]] 00:07:34.625 15:03:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3052925 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3052925 ']' 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3052925 00:07:34.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3052925) - No such process 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3052925 is not found' 00:07:34.625 Process with pid 3052925 is not found 00:07:34.625 15:03:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.625 00:07:34.625 real 0m23.670s 00:07:34.625 user 0m39.584s 00:07:34.625 sys 0m8.538s 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.625 15:03:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.625 ************************************ 00:07:34.625 END TEST cpu_locks 00:07:34.625 ************************************ 00:07:34.882 00:07:34.882 real 0m57.865s 00:07:34.882 user 1m50.832s 00:07:34.882 sys 0m15.568s 00:07:34.882 15:03:21 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.882 15:03:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.882 ************************************ 00:07:34.882 END TEST event 00:07:34.882 ************************************ 00:07:34.882 15:03:21 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:34.882 15:03:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.882 15:03:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.882 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:07:34.882 ************************************ 00:07:34.882 START TEST thread 00:07:34.882 ************************************ 00:07:34.882 15:03:21 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:34.882 * Looking for test storage... 00:07:34.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:34.882 15:03:21 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:34.882 15:03:21 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:07:34.882 15:03:21 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:35.142 15:03:21 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:35.142 15:03:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.142 15:03:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.142 15:03:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.142 15:03:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.142 15:03:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.142 15:03:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.142 15:03:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.142 15:03:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.142 15:03:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.142 15:03:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.142 15:03:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.142 15:03:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:35.142 15:03:21 thread -- scripts/common.sh@345 -- # : 1 00:07:35.142 15:03:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.142 15:03:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.142 15:03:21 thread -- scripts/common.sh@365 -- # decimal 1 00:07:35.142 15:03:21 thread -- scripts/common.sh@353 -- # local d=1 00:07:35.142 15:03:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.142 15:03:21 thread -- scripts/common.sh@355 -- # echo 1 00:07:35.142 15:03:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.142 15:03:21 thread -- scripts/common.sh@366 -- # decimal 2 00:07:35.142 15:03:21 thread -- scripts/common.sh@353 -- # local d=2 00:07:35.142 15:03:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.142 15:03:21 thread -- scripts/common.sh@355 -- # echo 2 00:07:35.142 15:03:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.142 15:03:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.142 15:03:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.142 15:03:21 thread -- scripts/common.sh@368 -- # return 0 00:07:35.142 15:03:21 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.142 15:03:21 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:35.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.142 --rc genhtml_branch_coverage=1 00:07:35.142 --rc genhtml_function_coverage=1 00:07:35.142 --rc genhtml_legend=1 00:07:35.142 --rc geninfo_all_blocks=1 00:07:35.142 --rc geninfo_unexecuted_blocks=1 00:07:35.142 00:07:35.142 ' 00:07:35.142 15:03:21 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:35.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.142 --rc genhtml_branch_coverage=1 00:07:35.142 --rc genhtml_function_coverage=1 00:07:35.142 --rc genhtml_legend=1 00:07:35.142 --rc geninfo_all_blocks=1 00:07:35.143 --rc geninfo_unexecuted_blocks=1 00:07:35.143 
00:07:35.143 ' 00:07:35.143 15:03:21 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:35.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.143 --rc genhtml_branch_coverage=1 00:07:35.143 --rc genhtml_function_coverage=1 00:07:35.143 --rc genhtml_legend=1 00:07:35.143 --rc geninfo_all_blocks=1 00:07:35.143 --rc geninfo_unexecuted_blocks=1 00:07:35.143 00:07:35.143 ' 00:07:35.143 15:03:21 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:35.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.143 --rc genhtml_branch_coverage=1 00:07:35.143 --rc genhtml_function_coverage=1 00:07:35.143 --rc genhtml_legend=1 00:07:35.143 --rc geninfo_all_blocks=1 00:07:35.143 --rc geninfo_unexecuted_blocks=1 00:07:35.143 00:07:35.143 ' 00:07:35.143 15:03:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:35.143 15:03:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:35.143 15:03:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.143 15:03:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.143 ************************************ 00:07:35.143 START TEST thread_poller_perf 00:07:35.143 ************************************ 00:07:35.143 15:03:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:35.143 [2024-10-28 15:03:21.830479] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:35.143 [2024-10-28 15:03:21.830544] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053415 ] 00:07:35.143 [2024-10-28 15:03:21.908160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.403 [2024-10-28 15:03:22.016725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.403 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:36.356 [2024-10-28T14:03:23.223Z] ====================================== 00:07:36.356 [2024-10-28T14:03:23.223Z] busy:2723212557 (cyc) 00:07:36.356 [2024-10-28T14:03:23.223Z] total_run_count: 133000 00:07:36.356 [2024-10-28T14:03:23.223Z] tsc_hz: 2700000000 (cyc) 00:07:36.356 [2024-10-28T14:03:23.223Z] ====================================== 00:07:36.356 [2024-10-28T14:03:23.223Z] poller_cost: 20475 (cyc), 7583 (nsec) 00:07:36.356 00:07:36.356 real 0m1.342s 00:07:36.356 user 0m1.247s 00:07:36.356 sys 0m0.086s 00:07:36.356 15:03:23 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.356 15:03:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.356 ************************************ 00:07:36.356 END TEST thread_poller_perf 00:07:36.356 ************************************ 00:07:36.356 15:03:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:36.356 15:03:23 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:36.356 15:03:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.356 15:03:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.617 ************************************ 00:07:36.617 START TEST thread_poller_perf 00:07:36.617 ************************************ 00:07:36.617 15:03:23 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:36.617 [2024-10-28 15:03:23.246952] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:36.617 [2024-10-28 15:03:23.247088] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053583 ] 00:07:36.617 [2024-10-28 15:03:23.403968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.876 [2024-10-28 15:03:23.523675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.876 Running 1000 pollers for 1 seconds with 0 microseconds period. 
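For the 1-microsecond-period run above, the reported poller_cost matches the busy cycle count divided by the number of poller iterations, converted to nanoseconds via tsc_hz; the same arithmetic in shell, using that run's numbers:

  busy=2723212557 total_run_count=133000 tsc_hz=2700000000
  echo $(( busy / total_run_count ))                          # 20475 cyc per poller invocation
  echo $(( busy / total_run_count * 1000000000 / tsc_hz ))    # ~7583 nsec at 2.7 GHz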
00:07:37.813 [2024-10-28T14:03:24.680Z] ====================================== 00:07:37.813 [2024-10-28T14:03:24.680Z] busy:2706562557 (cyc) 00:07:37.813 [2024-10-28T14:03:24.680Z] total_run_count: 1851000 00:07:37.813 [2024-10-28T14:03:24.680Z] tsc_hz: 2700000000 (cyc) 00:07:37.813 [2024-10-28T14:03:24.680Z] ====================================== 00:07:37.813 [2024-10-28T14:03:24.680Z] poller_cost: 1462 (cyc), 541 (nsec) 00:07:37.813 00:07:37.813 real 0m1.416s 00:07:37.813 user 0m1.260s 00:07:37.813 sys 0m0.143s 00:07:37.813 15:03:24 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.813 15:03:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.813 ************************************ 00:07:37.813 END TEST thread_poller_perf 00:07:37.813 ************************************ 00:07:37.813 15:03:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:37.813 00:07:37.813 real 0m3.099s 00:07:37.813 user 0m2.707s 00:07:37.813 sys 0m0.382s 00:07:37.813 15:03:24 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.813 15:03:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.813 ************************************ 00:07:37.813 END TEST thread 00:07:37.813 ************************************ 00:07:38.073 15:03:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:38.073 15:03:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.073 15:03:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.073 15:03:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.073 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:38.073 ************************************ 00:07:38.073 START TEST app_cmdline 00:07:38.073 ************************************ 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.073 * Looking for test storage... 
00:07:38.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.073 15:03:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:38.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.073 --rc genhtml_branch_coverage=1 00:07:38.073 --rc genhtml_function_coverage=1 00:07:38.073 --rc genhtml_legend=1 00:07:38.073 --rc geninfo_all_blocks=1 00:07:38.073 --rc geninfo_unexecuted_blocks=1 00:07:38.073 00:07:38.073 ' 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:38.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.073 --rc genhtml_branch_coverage=1 00:07:38.073 --rc genhtml_function_coverage=1 00:07:38.073 --rc genhtml_legend=1 00:07:38.073 --rc geninfo_all_blocks=1 00:07:38.073 --rc geninfo_unexecuted_blocks=1 
00:07:38.073 00:07:38.073 ' 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:38.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.073 --rc genhtml_branch_coverage=1 00:07:38.073 --rc genhtml_function_coverage=1 00:07:38.073 --rc genhtml_legend=1 00:07:38.073 --rc geninfo_all_blocks=1 00:07:38.073 --rc geninfo_unexecuted_blocks=1 00:07:38.073 00:07:38.073 ' 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:38.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.073 --rc genhtml_branch_coverage=1 00:07:38.073 --rc genhtml_function_coverage=1 00:07:38.073 --rc genhtml_legend=1 00:07:38.073 --rc geninfo_all_blocks=1 00:07:38.073 --rc geninfo_unexecuted_blocks=1 00:07:38.073 00:07:38.073 ' 00:07:38.073 15:03:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:38.073 15:03:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3053784 00:07:38.073 15:03:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:38.073 15:03:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3053784 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3053784 ']' 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.073 15:03:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.334 [2024-10-28 15:03:25.031452] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:07:38.334 [2024-10-28 15:03:25.031630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053784 ] 00:07:38.334 [2024-10-28 15:03:25.163706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.595 [2024-10-28 15:03:25.283587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.168 15:03:25 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.168 15:03:25 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:39.168 15:03:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:39.739 { 00:07:39.739 "version": "SPDK v25.01-pre git sha1 45379ed84", 00:07:39.739 "fields": { 00:07:39.739 "major": 25, 00:07:39.739 "minor": 1, 00:07:39.739 "patch": 0, 00:07:39.739 "suffix": "-pre", 00:07:39.739 "commit": "45379ed84" 00:07:39.739 } 00:07:39.739 } 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:39.739 15:03:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.739 15:03:26 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.679 request: 00:07:40.679 { 00:07:40.679 "method": "env_dpdk_get_mem_stats", 00:07:40.679 "req_id": 1 00:07:40.679 } 00:07:40.679 Got JSON-RPC error response 00:07:40.679 response: 00:07:40.679 { 00:07:40.679 "code": -32601, 00:07:40.679 "message": "Method not found" 00:07:40.679 } 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.679 15:03:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3053784 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3053784 ']' 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3053784 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3053784 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3053784' 00:07:40.679 killing process with pid 3053784 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@969 -- # kill 3053784 00:07:40.679 15:03:27 app_cmdline -- common/autotest_common.sh@974 -- # wait 3053784 00:07:41.248 00:07:41.248 real 0m3.095s 00:07:41.248 user 0m4.210s 00:07:41.248 sys 0m0.826s 00:07:41.248 15:03:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.248 15:03:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.248 ************************************ 00:07:41.248 END TEST app_cmdline 00:07:41.248 ************************************ 00:07:41.248 15:03:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.248 15:03:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.248 15:03:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.248 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:41.248 ************************************ 00:07:41.248 START TEST version 00:07:41.248 ************************************ 00:07:41.248 15:03:27 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.248 * Looking for test storage... 
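The app_cmdline run above starts the target with an RPC allow-list, which is why spdk_get_version succeeds while env_dpdk_get_mem_stats comes back as -32601 "Method not found"; condensed from the commands in this trace:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown above
  scripts/rpc.py rpc_get_methods           # allowed: lists exactly those two methods
  scripts/rpc.py env_dpdk_get_mem_stats    # not on the list: JSON-RPC error -32601, Method not found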
00:07:41.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.248 15:03:27 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:41.248 15:03:27 version -- common/autotest_common.sh@1689 -- # lcov --version 00:07:41.248 15:03:27 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:41.248 15:03:28 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:41.248 15:03:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.248 15:03:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.248 15:03:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.248 15:03:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.248 15:03:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.248 15:03:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.248 15:03:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.248 15:03:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.248 15:03:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.248 15:03:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.248 15:03:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.248 15:03:28 version -- scripts/common.sh@344 -- # case "$op" in 00:07:41.248 15:03:28 version -- scripts/common.sh@345 -- # : 1 00:07:41.248 15:03:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.248 15:03:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.248 15:03:28 version -- scripts/common.sh@365 -- # decimal 1 00:07:41.248 15:03:28 version -- scripts/common.sh@353 -- # local d=1 00:07:41.248 15:03:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.248 15:03:28 version -- scripts/common.sh@355 -- # echo 1 00:07:41.248 15:03:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.248 15:03:28 version -- scripts/common.sh@366 -- # decimal 2 00:07:41.248 15:03:28 version -- scripts/common.sh@353 -- # local d=2 00:07:41.248 15:03:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.248 15:03:28 version -- scripts/common.sh@355 -- # echo 2 00:07:41.248 15:03:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.248 15:03:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.248 15:03:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.248 15:03:28 version -- scripts/common.sh@368 -- # return 0 00:07:41.248 15:03:28 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.248 15:03:28 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:41.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.248 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 15:03:28 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:41.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.249 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 15:03:28 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:41.249 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.249 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 15:03:28 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:41.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.249 --rc genhtml_branch_coverage=1 00:07:41.249 --rc genhtml_function_coverage=1 00:07:41.249 --rc genhtml_legend=1 00:07:41.249 --rc geninfo_all_blocks=1 00:07:41.249 --rc geninfo_unexecuted_blocks=1 00:07:41.249 00:07:41.249 ' 00:07:41.249 15:03:28 version -- app/version.sh@17 -- # get_header_version major 00:07:41.249 15:03:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # cut -f2 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.249 15:03:28 version -- app/version.sh@17 -- # major=25 00:07:41.249 15:03:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:41.249 15:03:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # cut -f2 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.249 15:03:28 version -- app/version.sh@18 -- # minor=1 00:07:41.249 15:03:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:41.249 15:03:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # cut -f2 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.249 15:03:28 version -- app/version.sh@19 -- # patch=0 00:07:41.249 15:03:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:41.249 15:03:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.249 15:03:28 version -- app/version.sh@14 -- # cut -f2 00:07:41.249 15:03:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:41.249 15:03:28 version -- app/version.sh@22 -- # version=25.1 00:07:41.249 15:03:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:41.249 15:03:28 version -- app/version.sh@28 -- # version=25.1rc0 00:07:41.249 15:03:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.249 15:03:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:41.509 15:03:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:41.509 15:03:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:41.509 00:07:41.509 real 0m0.233s 00:07:41.510 user 0m0.153s 00:07:41.510 sys 0m0.114s 00:07:41.510 15:03:28 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.510 
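The get_header_version calls traced above all follow the same grep/cut/tr pattern against include/spdk/version.h; a condensed sketch of the extraction, with the header path abbreviated from the full checkout location in this log:

  get_header_version() {   # $1 is MAJOR, MINOR, PATCH or SUFFIX
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }
  version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 25.1 here, with patch 0 and suffix -pre
  # the test then appends rc0 for -pre builds and compares against python3 -c 'import spdk; print(spdk.__version__)'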
15:03:28 version -- common/autotest_common.sh@10 -- # set +x 00:07:41.510 ************************************ 00:07:41.510 END TEST version 00:07:41.510 ************************************ 00:07:41.510 15:03:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:41.510 15:03:28 -- spdk/autotest.sh@194 -- # uname -s 00:07:41.510 15:03:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:41.510 15:03:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:41.510 15:03:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:41.510 15:03:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:41.510 15:03:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.510 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:07:41.510 15:03:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:41.510 15:03:28 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:41.510 15:03:28 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.510 15:03:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.510 15:03:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.510 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:07:41.510 ************************************ 00:07:41.510 START TEST nvmf_tcp 00:07:41.510 ************************************ 00:07:41.510 15:03:28 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.510 * Looking for test storage... 
00:07:41.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:41.510 15:03:28 nvmf_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:41.510 15:03:28 nvmf_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:07:41.510 15:03:28 nvmf_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.771 15:03:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.771 --rc genhtml_branch_coverage=1 00:07:41.771 --rc genhtml_function_coverage=1 00:07:41.771 --rc genhtml_legend=1 00:07:41.771 --rc geninfo_all_blocks=1 00:07:41.771 --rc geninfo_unexecuted_blocks=1 00:07:41.771 00:07:41.771 ' 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.771 --rc genhtml_branch_coverage=1 00:07:41.771 --rc genhtml_function_coverage=1 00:07:41.771 --rc genhtml_legend=1 00:07:41.771 --rc geninfo_all_blocks=1 00:07:41.771 --rc geninfo_unexecuted_blocks=1 00:07:41.771 00:07:41.771 ' 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:07:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.771 --rc genhtml_branch_coverage=1 00:07:41.771 --rc genhtml_function_coverage=1 00:07:41.771 --rc genhtml_legend=1 00:07:41.771 --rc geninfo_all_blocks=1 00:07:41.771 --rc geninfo_unexecuted_blocks=1 00:07:41.771 00:07:41.771 ' 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.771 --rc genhtml_branch_coverage=1 00:07:41.771 --rc genhtml_function_coverage=1 00:07:41.771 --rc genhtml_legend=1 00:07:41.771 --rc geninfo_all_blocks=1 00:07:41.771 --rc geninfo_unexecuted_blocks=1 00:07:41.771 00:07:41.771 ' 00:07:41.771 15:03:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:41.771 15:03:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:41.771 15:03:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.771 15:03:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.771 ************************************ 00:07:41.771 START TEST nvmf_target_core 00:07:41.771 ************************************ 00:07:41.771 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:41.771 * Looking for test storage... 00:07:41.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:41.771 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:41.771 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # lcov --version 00:07:41.771 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:42.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.055 --rc genhtml_branch_coverage=1 00:07:42.055 --rc genhtml_function_coverage=1 00:07:42.055 --rc genhtml_legend=1 00:07:42.055 --rc geninfo_all_blocks=1 00:07:42.055 --rc geninfo_unexecuted_blocks=1 00:07:42.055 00:07:42.055 ' 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:42.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.055 --rc genhtml_branch_coverage=1 00:07:42.055 --rc genhtml_function_coverage=1 00:07:42.055 --rc genhtml_legend=1 00:07:42.055 --rc geninfo_all_blocks=1 00:07:42.055 --rc geninfo_unexecuted_blocks=1 00:07:42.055 00:07:42.055 ' 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:42.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.055 --rc genhtml_branch_coverage=1 00:07:42.055 --rc genhtml_function_coverage=1 00:07:42.055 --rc genhtml_legend=1 00:07:42.055 --rc geninfo_all_blocks=1 00:07:42.055 --rc geninfo_unexecuted_blocks=1 00:07:42.055 00:07:42.055 ' 00:07:42.055 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:42.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.055 --rc genhtml_branch_coverage=1 00:07:42.055 --rc genhtml_function_coverage=1 00:07:42.055 --rc genhtml_legend=1 00:07:42.055 --rc geninfo_all_blocks=1 00:07:42.055 --rc geninfo_unexecuted_blocks=1 00:07:42.056 00:07:42.056 ' 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.056 
************************************ 00:07:42.056 START TEST nvmf_abort 00:07:42.056 ************************************ 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:42.056 * Looking for test storage... 00:07:42.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lcov --version 00:07:42.056 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:42.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.317 --rc genhtml_branch_coverage=1 00:07:42.317 --rc genhtml_function_coverage=1 00:07:42.317 --rc genhtml_legend=1 00:07:42.317 --rc geninfo_all_blocks=1 00:07:42.317 --rc geninfo_unexecuted_blocks=1 00:07:42.317 00:07:42.317 ' 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:42.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.317 --rc genhtml_branch_coverage=1 00:07:42.317 --rc genhtml_function_coverage=1 00:07:42.317 --rc genhtml_legend=1 00:07:42.317 --rc geninfo_all_blocks=1 00:07:42.317 --rc geninfo_unexecuted_blocks=1 00:07:42.317 00:07:42.317 ' 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:42.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.317 --rc genhtml_branch_coverage=1 00:07:42.317 --rc genhtml_function_coverage=1 00:07:42.317 --rc genhtml_legend=1 00:07:42.317 --rc geninfo_all_blocks=1 00:07:42.317 --rc geninfo_unexecuted_blocks=1 00:07:42.317 00:07:42.317 ' 00:07:42.317 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:42.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.317 --rc genhtml_branch_coverage=1 00:07:42.317 --rc genhtml_function_coverage=1 00:07:42.317 --rc genhtml_legend=1 00:07:42.317 --rc geninfo_all_blocks=1 00:07:42.317 --rc geninfo_unexecuted_blocks=1 00:07:42.317 00:07:42.317 ' 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.318 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.318 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.318 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.318 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.318 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.318 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.668 15:03:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:45.668 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:45.668 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.668 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:45.669 15:03:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:45.669 Found net devices under 0000:84:00.0: cvl_0_0 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:45.669 Found net devices under 0000:84:00.1: cvl_0_1 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.669 15:03:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:45.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:07:45.669 00:07:45.669 --- 10.0.0.2 ping statistics --- 00:07:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.669 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:07:45.669 00:07:45.669 --- 10.0.0.1 ping statistics --- 00:07:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.669 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:45.669 15:03:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3056167 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3056167 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3056167 ']' 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.669 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.669 [2024-10-28 15:03:32.133909] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
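For reference, the nvmf_tcp_init network bring-up traced above reduces to the commands below. This is a minimal sketch reconstructed from this trace only; the interface names (cvl_0_0 as the target-side NIC, cvl_0_1 as the initiator-side NIC), the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addressing are simply the values this particular run selected, not fixed parameters of the test.

# Target NIC is isolated in its own network namespace; the initiator NIC stays in the default namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface (the trace tags the rule with an SPDK_NVMF comment).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, then load the kernel NVMe/TCP initiator module.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp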
00:07:45.669 [2024-10-28 15:03:32.134112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.669 [2024-10-28 15:03:32.316033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.669 [2024-10-28 15:03:32.427407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.669 [2024-10-28 15:03:32.427459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.669 [2024-10-28 15:03:32.427476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.669 [2024-10-28 15:03:32.427490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.669 [2024-10-28 15:03:32.427502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.669 [2024-10-28 15:03:32.429238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.669 [2024-10-28 15:03:32.431675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.669 [2024-10-28 15:03:32.431682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 [2024-10-28 15:03:32.602384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 Malloc0 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 Delay0 
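The rpc_cmd calls in this abort test are thin wrappers around SPDK's scripts/rpc.py, issued against the nvmf_tgt started above (the trace waits on the default /var/tmp/spdk.sock). Taken together with the subsystem, namespace, and listener RPCs traced just below, the target configuration built here amounts to roughly the following sketch; the NQN, serial number, bdev names, and sizes are the values used by this run.

# Enable the TCP transport with the options passed by abort.sh above (-o, -u 8192, -a 256).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
# A 64 MB malloc bdev with a 4096-byte block size, wrapped in a delay bdev so outstanding I/O
# lingers long enough for the abort example to cancel it.
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Subsystem with the delay bdev as its namespace, listening on the target address from the namespace setup.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The abort example then drives the subsystem at queue depth 128 for a short timed run and aborts the queued I/O.
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128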
00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 [2024-10-28 15:03:32.683775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.981 15:03:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:46.239 [2024-10-28 15:03:32.840813] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:48.768 Initializing NVMe Controllers 00:07:48.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:48.768 controller IO queue size 128 less than required 00:07:48.768 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:48.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:48.768 Initialization complete. Launching workers. 
00:07:48.768 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28396 00:07:48.768 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28457, failed to submit 62 00:07:48.768 success 28400, unsuccessful 57, failed 0 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.768 rmmod nvme_tcp 00:07:48.768 rmmod nvme_fabrics 00:07:48.768 rmmod nvme_keyring 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3056167 ']' 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3056167 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3056167 ']' 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3056167 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3056167 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3056167' 00:07:48.768 killing process with pid 3056167 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3056167 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3056167 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.768 15:03:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.768 15:03:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.676 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.676 00:07:50.676 real 0m8.723s 00:07:50.676 user 0m11.912s 00:07:50.676 sys 0m3.504s 00:07:50.676 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.676 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 ************************************ 00:07:50.676 END TEST nvmf_abort 00:07:50.676 ************************************ 00:07:50.676 15:03:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:50.676 15:03:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.676 15:03:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.676 15:03:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.935 ************************************ 00:07:50.935 START TEST nvmf_ns_hotplug_stress 00:07:50.935 ************************************ 00:07:50.935 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:50.935 * Looking for test storage... 
00:07:50.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.935 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:50.935 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:07:50.935 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:51.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.195 --rc genhtml_branch_coverage=1 00:07:51.195 --rc genhtml_function_coverage=1 00:07:51.195 --rc genhtml_legend=1 00:07:51.195 --rc geninfo_all_blocks=1 00:07:51.195 --rc geninfo_unexecuted_blocks=1 00:07:51.195 00:07:51.195 ' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:51.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.195 --rc genhtml_branch_coverage=1 00:07:51.195 --rc genhtml_function_coverage=1 00:07:51.195 --rc genhtml_legend=1 00:07:51.195 --rc geninfo_all_blocks=1 00:07:51.195 --rc geninfo_unexecuted_blocks=1 00:07:51.195 00:07:51.195 ' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:51.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.195 --rc genhtml_branch_coverage=1 00:07:51.195 --rc genhtml_function_coverage=1 00:07:51.195 --rc genhtml_legend=1 00:07:51.195 --rc geninfo_all_blocks=1 00:07:51.195 --rc geninfo_unexecuted_blocks=1 00:07:51.195 00:07:51.195 ' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:51.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.195 --rc genhtml_branch_coverage=1 00:07:51.195 --rc genhtml_function_coverage=1 00:07:51.195 --rc genhtml_legend=1 00:07:51.195 --rc geninfo_all_blocks=1 00:07:51.195 --rc geninfo_unexecuted_blocks=1 00:07:51.195 00:07:51.195 ' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.195 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.196 15:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:53.731 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:53.993 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.993 
15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:53.993 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:53.993 Found net devices under 0000:84:00.0: cvl_0_0 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:53.993 Found net devices under 0000:84:00.1: cvl_0_1 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:07:53.993 00:07:53.993 --- 10.0.0.2 ping statistics --- 00:07:53.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.993 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:53.993 00:07:53.993 --- 10.0.0.1 ping statistics --- 00:07:53.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.993 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3058666 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3058666 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
3058666 ']' 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.993 15:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:54.254 [2024-10-28 15:03:40.894454] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:07:54.254 [2024-10-28 15:03:40.894625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.254 [2024-10-28 15:03:41.065611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.513 [2024-10-28 15:03:41.191297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.513 [2024-10-28 15:03:41.191403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.513 [2024-10-28 15:03:41.191439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.513 [2024-10-28 15:03:41.191470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.513 [2024-10-28 15:03:41.191498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
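A minimal sketch of the test-bed bring-up traced above, condensed into one block; interface names, addresses and paths are copied from the trace, while running it as a single script and backgrounding nvmf_tgt with '&' are assumptions of this sketch, not part of the captured output:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the ipts helper above also tags the rule with a comment
    ping -c 1 10.0.0.2                                             # initiator side -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator side
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &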
00:07:54.513 [2024-10-28 15:03:41.194846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.513 [2024-10-28 15:03:41.194953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.513 [2024-10-28 15:03:41.194957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:54.513 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.079 [2024-10-28 15:03:41.657563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.079 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:55.644 15:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.209 [2024-10-28 15:03:42.770002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.209 15:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.773 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:57.031 Malloc0 00:07:57.031 15:03:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:57.288 Delay0 00:07:57.288 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.850 15:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:58.413 NULL1 00:07:58.671 15:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:59.238 15:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3059275 00:07:59.238 15:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:59.238 15:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:07:59.238 15:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.611 Read completed with error (sct=0, sc=11) 00:08:00.611 15:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.868 15:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:00.868 15:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:01.434 true 00:08:01.434 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:01.434 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.999 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.256 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.568 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.568 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:02.568 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:02.825 true 00:08:02.825 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:02.825 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.390 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.902 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:03.902 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:04.467 true 00:08:04.725 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:04.725 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.658 15:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.223 15:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:06.223 15:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:06.480 true 00:08:06.480 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:06.480 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.853 15:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.380 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:08.380 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:08.946 true 00:08:08.946 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:08.946 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.592 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
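The entries above and below repeat the same hot-plug cycle: while spdk_nvme_perf (PID 3059275) is still running, namespace 1 is removed from cnode1, the Delay0 bdev is re-attached, and NULL1 is grown by one step; the rate-limited "Read completed with error" lines are reads reported by the perf initiator while the namespace is detached. A rough sketch of one iteration, assuming the while-loop form implied by the repeated @44-@50 trace lines ($rpc is shorthand for the scripts/rpc.py path used throughout the trace, $PERF_PID is the PID recorded at @42):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$PERF_PID"; do                                     # loop until spdk_nvme_perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 under I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                  # 1001, 1002, ... as seen in the trace
        $rpc bdev_null_resize NULL1 "$null_size"                      # resize NULL1 to the new size
    done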
00:08:09.850 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:09.850 15:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:10.415 true 00:08:10.415 15:03:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:10.415 15:03:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.981 15:03:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.546 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:11.546 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:12.111 true 00:08:12.111 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:12.111 15:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.044 15:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.558 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:13.558 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:14.123 true 00:08:14.123 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:14.123 15:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.693 15:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.950 15:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:14.950 15:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:15.515 true 00:08:15.515 15:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:15.515 15:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.079 15:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.643 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:16.643 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:16.899 true 00:08:16.899 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:16.899 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:18.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.269 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.834 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:18.834 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:19.397 true 00:08:19.397 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:19.397 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.329 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.843 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:20.843 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:21.408 true 00:08:21.408 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:21.408 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.973 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:21.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.253 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:22.253 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:22.818 true 00:08:22.818 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:22.818 15:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.382 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.896 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:23.896 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:24.462 true 00:08:24.462 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:24.462 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.832 15:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:25.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.348 15:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:26.348 15:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:26.606 true 00:08:26.606 15:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:26.606 15:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.171 15:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.686 15:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:27.686 15:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:27.943 true 00:08:27.943 15:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:27.943 15:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.506 15:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:28.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.021 15:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:29.021 15:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:29.279 true 00:08:29.279 15:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:29.279 15:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.536 Initializing NVMe Controllers 00:08:29.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.536 Controller IO queue size 128, less than required. 00:08:29.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.536 Controller IO queue size 128, less than required. 00:08:29.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:29.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:29.536 Initialization complete. Launching workers. 
00:08:29.536 ======================================================== 00:08:29.536 Latency(us) 00:08:29.536 Device Information : IOPS MiB/s Average min max 00:08:29.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5174.32 2.53 19530.57 3052.40 1014074.83 00:08:29.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15880.15 7.75 8059.89 2764.69 531680.14 00:08:29.536 ======================================================== 00:08:29.536 Total : 21054.46 10.28 10878.91 2764.69 1014074.83 00:08:29.536 00:08:29.793 15:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.358 15:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:30.358 15:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:30.616 true 00:08:30.616 15:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3059275 00:08:30.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3059275) - No such process 00:08:30.616 15:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3059275 00:08:30.616 15:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.180 15:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.744 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:31.744 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:31.744 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:31.744 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.744 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:32.310 null0 00:08:32.310 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.310 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.310 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:32.879 null1 00:08:32.879 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.879 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.879 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:33.444 null2 00:08:33.444 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.444 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.444 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:34.010 null3 00:08:34.010 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.010 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.010 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:34.575 null4 00:08:34.575 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.575 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.575 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:35.139 null5 00:08:35.139 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.139 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.139 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:35.703 null6 00:08:35.703 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.704 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.704 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:36.277 null7 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
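At sh@58-sh@60 the script moves into its multi-worker phase: nthreads is set to 8, the pids array is cleared, and one null bdev per worker (null0 through null7) is created with the arguments seen in the trace (100 MB, 4096-byte blocks); each bdev_null_create RPC prints the new bdev's name, which is the bare null0/null1/... output interleaved above. A minimal sketch of that creation loop, reconstructed from the trace (rpc_py is the same scripts/rpc.py shorthand as in the earlier sketch); the per-worker launches begin in the entries that follow:

    # Sketch reconstructed from the sh@58-sh@60 xtrace above; not the verbatim script text.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # same arguments as in the trace: bdev name, size in MB (100), block size in bytes (4096)
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done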
00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:36.277 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
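Each worker launched at sh@62-sh@64 is an instance of the add_remove helper traced at sh@14-sh@18: it binds one namespace ID to one null bdev and then, ten times (the (( i < 10 )) counter), attaches the bdev to nqn.2016-06.io.spdk:cnode1 as that namespace and detaches it again, so eight of these loops run against the subsystem concurrently; the remaining workers and the wait on all eight PIDs follow just below. A sketch of the helper and the fan-out, reconstructed from the trace rather than quoted from the script (rpc_py as before):

    # Sketch reconstructed from the sh@14-sh@18 and sh@62-sh@66 xtrace; not the verbatim script text.
    add_remove() {
        local nsid=$1 bdev=$2                                                                # sh@14
        for ((i = 0; i < 10; i++)); do                                                       # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"    # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"            # sh@18
        done
    }

    for ((i = 0; i < nthreads; i++)); do                                                     # sh@62
        add_remove $((i + 1)) "null$i" &                                                     # sh@63: add_remove 1 null0, 2 null1, ...
        pids+=($!)                                                                           # sh@64: collect worker PIDs
    done
    wait "${pids[@]}"                                                                        # sh@66: e.g. wait 3063641 3063642 ... 3063654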
00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3063641 3063642 3063644 3063646 3063648 3063650 3063652 3063654 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.278 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.537 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.053 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.312 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.570 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.828 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.086 15:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.345 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.603 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.861 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.861 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.861 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.861 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.861 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.861 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.861 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.120 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.120 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.120 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.120 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.120 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.120 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.120 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.378 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.637 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.895 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.896 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:40.155 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.155 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.155 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.155 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.155 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.155 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.155 15:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.155 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.414 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.415 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:40.415 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.415 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.415 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.673 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.673 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.673 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:40.673 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.673 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.673 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.931 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:41.189 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.190 15:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:41.190 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.190 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.190 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:41.447 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:41.447 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.447 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:41.447 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.447 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
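The interleaved add/remove trace running through this part of the log comes from the ns_hotplug_stress.sh@16-@18 lines: each of the eight null bdevs (null0-null7) is repeatedly attached to and detached from nqn.2016-06.io.spdk:cnode1 as namespaces 1-8, ten rounds each, and the shuffled ordering suggests the eight loops run as concurrent jobs. A minimal sketch of that pattern, reconstructed from the xtrace output rather than copied from the script itself (the rpc.py path, NQN and bdev names are the ones visible in the log; the loop and job structure are an assumption):

```bash
# Sketch reconstructed from the ns_hotplug_stress trace; not the script's actual code.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

# One hot-plug loop per namespace, run concurrently to stress the target's
# namespace add/remove path; the xtrace interleaving above reflects these jobs.
for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &
done
wait
```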
00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.705 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.020 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:42.021 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:42.333 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:42.333 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.333 15:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:42.333 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:42.333 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:42.333 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:42.333 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.333 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.333 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.333 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.592 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.592 rmmod nvme_tcp 00:08:42.592 rmmod nvme_fabrics 00:08:42.592 rmmod nvme_keyring 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3058666 ']' 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3058666 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3058666 ']' 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3058666 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3058666 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3058666' 00:08:42.853 killing process with pid 3058666 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3058666 00:08:42.853 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3058666 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.113 15:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.024 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.024 00:08:45.024 real 0m54.315s 00:08:45.024 user 4m5.933s 00:08:45.024 sys 0m18.645s 00:08:45.024 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.024 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.024 ************************************ 00:08:45.024 END TEST nvmf_ns_hotplug_stress 00:08:45.024 ************************************ 00:08:45.285 15:04:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:45.285 15:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.285 15:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.285 15:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.285 ************************************ 00:08:45.285 START TEST nvmf_delete_subsystem 00:08:45.285 ************************************ 00:08:45.285 15:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:45.285 * Looking for test storage... 
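Before delete_subsystem.sh gets going, it is worth condensing the teardown that closed out the hotplug test just above the END TEST banner: nvmftestfini unloads the kernel NVMe-oF modules, stops the nvmf_tgt started for the test, strips the SPDK-tagged iptables rules and clears the test interface. All commands below appear in the trace except the namespace removal, whose internals the harness hides behind xtrace_disable_per_cmd, so that line is an assumption; the retry loop and trap bookkeeping of the real helper are also left out:

```bash
# Condensed from the nvmftestfini/nvmfcleanup trace above.
sync
modprobe -v -r nvme-tcp                                # also drops nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                        # nvmf_tgt started for the test (pid 3058666 here)
wait "$nvmfpid"                                        # works because nvmf_tgt is a child of this shell
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side test address
```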
00:08:45.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lcov --version 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:45.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.286 --rc genhtml_branch_coverage=1 00:08:45.286 --rc genhtml_function_coverage=1 00:08:45.286 --rc genhtml_legend=1 00:08:45.286 --rc geninfo_all_blocks=1 00:08:45.286 --rc geninfo_unexecuted_blocks=1 00:08:45.286 00:08:45.286 ' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:45.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.286 --rc genhtml_branch_coverage=1 00:08:45.286 --rc genhtml_function_coverage=1 00:08:45.286 --rc genhtml_legend=1 00:08:45.286 --rc geninfo_all_blocks=1 00:08:45.286 --rc geninfo_unexecuted_blocks=1 00:08:45.286 00:08:45.286 ' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:45.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.286 --rc genhtml_branch_coverage=1 00:08:45.286 --rc genhtml_function_coverage=1 00:08:45.286 --rc genhtml_legend=1 00:08:45.286 --rc geninfo_all_blocks=1 00:08:45.286 --rc geninfo_unexecuted_blocks=1 00:08:45.286 00:08:45.286 ' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:45.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.286 --rc genhtml_branch_coverage=1 00:08:45.286 --rc genhtml_function_coverage=1 00:08:45.286 --rc genhtml_legend=1 00:08:45.286 --rc geninfo_all_blocks=1 00:08:45.286 --rc geninfo_unexecuted_blocks=1 00:08:45.286 00:08:45.286 ' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.286 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.287 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.578 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:48.579 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.579 
15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:48.579 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:48.579 Found net devices under 0000:84:00.0: cvl_0_0 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:48.579 Found net devices under 0000:84:00.1: cvl_0_1 
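With both E810 ports resolved to their net devices (cvl_0_0 and cvl_0_1), nvmf_tcp_init in the trace that follows builds the usual two-sided TCP test topology on a single host: the first port is moved into a private network namespace and becomes the target side, while the second stays in the root namespace as the initiator side. Condensed from the commands visible below, with the names and addresses taken straight from the log:

```bash
# Target side lives in its own netns so target and initiator can share one machine.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the 10.0.0.0/24 test network.
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in on the initiator interface, then verify reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

In the actual trace the iptables rule also carries an `-m comment --comment 'SPDK_NVMF:...'` tag, which is what lets the teardown later remove only SPDK's rules.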
00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.579 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:08:48.579 00:08:48.579 --- 10.0.0.2 ping statistics --- 00:08:48.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.579 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:08:48.579 00:08:48.579 --- 10.0.0.1 ping statistics --- 00:08:48.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.579 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3066578 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3066578 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3066578 ']' 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.579 15:04:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.579 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.579 [2024-10-28 15:04:35.294858] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:08:48.579 [2024-10-28 15:04:35.294974] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.839 [2024-10-28 15:04:35.476772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:48.839 [2024-10-28 15:04:35.595459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.839 [2024-10-28 15:04:35.595593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.839 [2024-10-28 15:04:35.595633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.839 [2024-10-28 15:04:35.595682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.839 [2024-10-28 15:04:35.595710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.839 [2024-10-28 15:04:35.598758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.839 [2024-10-28 15:04:35.598775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.098 [2024-10-28 15:04:35.862238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.098 15:04:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.098 [2024-10-28 15:04:35.886881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.098 NULL1 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.098 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.098 Delay0 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3066721 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:49.099 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:49.357 [2024-10-28 15:04:36.034192] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
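Stripped of the rpc_cmd and xtrace noise, the setup traced above gives the target (running inside cvl_0_0_ns_spdk and listening on 10.0.0.2:4420) a single subsystem whose only namespace sits on a delay bdev, so that I/O issued by spdk_nvme_perf is still queued when the subsystem is deleted. The same sequence issued directly through scripts/rpc.py instead of the rpc_cmd wrapper, which drives the same RPCs; the flags and arguments are copied from the trace, the rpc_py/perf variable names are mine, and the final wait is an assumption about how the script collects the perf job:

```bash
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" bdev_null_create NULL1 1000 512          # null backing bdev, 512-byte blocks
"$rpc_py" bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000    # roughly 1 s of artificial latency per I/O
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Start 5 s of queued random I/O, give it time to fill the queues, then delete
# the subsystem while the perf job is still running (delete_subsystem.sh@26-@32).
"$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait "$perf_pid" || true   # perf is expected to report the aborted I/O shown below
```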
00:08:51.256 15:04:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.256 15:04:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.256 15:04:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Write completed with error (sct=0, sc=8) 00:08:51.514 starting I/O failed: -6 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.514 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 
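The "Read/Write completed with error (sct=0, sc=8)" lines in this flood are spdk_nvme_perf reporting completions with NVMe generic status 0x08 (command aborted due to SQ deletion), and "starting I/O failed: -6" marks submissions refused with what corresponds to -ENXIO as the queue pairs fail; both are the expected outcome of deleting the subsystem mid-run rather than a test failure. If the perf output had been captured to a file, the two cases could be tallied like this (perf.log is a hypothetical capture, not something the test creates):

```bash
# Hypothetical post-processing of a captured perf log; the grep patterns match
# the messages visible in the trace.
grep -c 'completed with error (sct=0, sc=8)' perf.log   # completions aborted by SQ deletion
grep -c 'starting I/O failed: -6' perf.log              # submissions rejected once qpairs are gone
```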
00:08:51.515 [2024-10-28 15:04:38.248504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f9680 is same with the state(6) to be set 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 starting I/O failed: -6 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 [2024-10-28 15:04:38.249049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb694000c00 is same with the state(6) to be set 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, 
sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.515 Write completed with error (sct=0, sc=8) 00:08:51.515 Read completed with error (sct=0, sc=8) 00:08:51.516 Write completed with error (sct=0, sc=8) 00:08:51.516 Read completed 
with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Write completed with error (sct=0, sc=8) 00:08:51.516 Write completed with error (sct=0, sc=8) 00:08:51.516 Write completed with error (sct=0, sc=8) 00:08:51.516 Write completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:51.516 Write completed with error (sct=0, sc=8) 00:08:51.516 Read completed with error (sct=0, sc=8) 00:08:52.449 [2024-10-28 15:04:39.215509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fa9a0 is same with the state(6) to be set 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 [2024-10-28 15:04:39.250876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f92c0 is same with the state(6) to be set 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error 
(sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 [2024-10-28 15:04:39.251107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f9860 is same with the state(6) to be set 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 [2024-10-28 15:04:39.251332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f94a0 is same with the state(6) to be set 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Write completed with error (sct=0, sc=8) 
00:08:52.449 Write completed with error (sct=0, sc=8) 00:08:52.449 Read completed with error (sct=0, sc=8) 00:08:52.449 [2024-10-28 15:04:39.252075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb69400d310 is same with the state(6) to be set 00:08:52.449 Initializing NVMe Controllers 00:08:52.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:52.449 Controller IO queue size 128, less than required. 00:08:52.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:52.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:52.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:52.449 Initialization complete. Launching workers. 00:08:52.449 ======================================================== 00:08:52.449 Latency(us) 00:08:52.449 Device Information : IOPS MiB/s Average min max 00:08:52.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.48 0.09 956252.42 859.86 1014028.74 00:08:52.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.21 0.08 881282.70 326.70 1014015.30 00:08:52.449 ======================================================== 00:08:52.449 Total : 335.69 0.16 921812.86 326.70 1014028.74 00:08:52.449 00:08:52.449 [2024-10-28 15:04:39.253551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fa9a0 (9): Bad file descriptor 00:08:52.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:52.450 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.450 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:52.450 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3066721 00:08:52.450 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3066721 00:08:53.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3066721) - No such process 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3066721 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3066721 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 
-- # wait 3066721 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.021 [2024-10-28 15:04:39.777836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3067138 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:53.021 15:04:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.021 [2024-10-28 15:04:39.868768] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
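Both halves of the test use the same delete-and-poll mechanics visible in the trace: run spdk_nvme_perf against the listener, and check the perf pid with kill -0 on a 0.5 s cadence with a bounded delay counter. A minimal sketch of the first half's pattern, with variable names assumed (the RPC, the kill -0 probe, the sleep 0.5 and the counter are the steps the log shows from delete_subsystem.sh):

  # Delete the subsystem while perf still has I/O queued, then poll until the
  # perf process exits; the script caps the wait with a delay counter.
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && break
      sleep 0.5
  done
  wait "$perf_pid" || true    # perf exits non-zero because its I/O was aborted

Commands still outstanding against the deleted subsystem complete back to the host with an error status, which is what the (sct=0, sc=8) completions and the "errors occurred" summary in the first run above correspond to; the second half re-creates the subsystem and repeats the poll loop with a 20-iteration cap.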
00:08:53.589 15:04:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.589 15:04:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:53.589 15:04:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.154 15:04:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.154 15:04:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:54.154 15:04:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.719 15:04:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.719 15:04:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:54.719 15:04:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.977 15:04:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.977 15:04:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:54.977 15:04:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.543 15:04:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.543 15:04:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:55.543 15:04:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.109 15:04:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.109 15:04:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:56.109 15:04:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.366 Initializing NVMe Controllers 00:08:56.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.366 Controller IO queue size 128, less than required. 00:08:56.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:56.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:56.366 Initialization complete. Launching workers. 
00:08:56.366 ======================================================== 00:08:56.366 Latency(us) 00:08:56.366 Device Information : IOPS MiB/s Average min max 00:08:56.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004539.72 1000248.08 1041794.57 00:08:56.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004693.60 1000246.66 1041724.74 00:08:56.366 ======================================================== 00:08:56.366 Total : 256.00 0.12 1004616.66 1000246.66 1041794.57 00:08:56.366 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3067138 00:08:56.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3067138) - No such process 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3067138 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.625 rmmod nvme_tcp 00:08:56.625 rmmod nvme_fabrics 00:08:56.625 rmmod nvme_keyring 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3066578 ']' 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3066578 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3066578 ']' 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3066578 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3066578 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3066578' 00:08:56.625 killing process with pid 3066578 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3066578 00:08:56.625 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3066578 00:08:56.884 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.884 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.884 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.884 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:56.884 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:56.884 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.142 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.142 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.142 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.142 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.142 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.142 15:04:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.046 00:08:59.046 real 0m13.868s 00:08:59.046 user 0m29.245s 00:08:59.046 sys 0m3.954s 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 ************************************ 00:08:59.046 END TEST nvmf_delete_subsystem 00:08:59.046 ************************************ 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 ************************************ 00:08:59.046 START TEST nvmf_host_management 00:08:59.046 ************************************ 00:08:59.046 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:59.306 * Looking for test storage... 
00:08:59.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.306 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:59.306 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lcov --version 00:08:59.306 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:59.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.306 --rc genhtml_branch_coverage=1 00:08:59.306 --rc genhtml_function_coverage=1 00:08:59.306 --rc genhtml_legend=1 00:08:59.306 --rc geninfo_all_blocks=1 00:08:59.306 --rc geninfo_unexecuted_blocks=1 00:08:59.306 00:08:59.306 ' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:59.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.306 --rc genhtml_branch_coverage=1 00:08:59.306 --rc genhtml_function_coverage=1 00:08:59.306 --rc genhtml_legend=1 00:08:59.306 --rc geninfo_all_blocks=1 00:08:59.306 --rc geninfo_unexecuted_blocks=1 00:08:59.306 00:08:59.306 ' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:59.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.306 --rc genhtml_branch_coverage=1 00:08:59.306 --rc genhtml_function_coverage=1 00:08:59.306 --rc genhtml_legend=1 00:08:59.306 --rc geninfo_all_blocks=1 00:08:59.306 --rc geninfo_unexecuted_blocks=1 00:08:59.306 00:08:59.306 ' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:59.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.306 --rc genhtml_branch_coverage=1 00:08:59.306 --rc genhtml_function_coverage=1 00:08:59.306 --rc genhtml_legend=1 00:08:59.306 --rc geninfo_all_blocks=1 00:08:59.306 --rc geninfo_unexecuted_blocks=1 00:08:59.306 00:08:59.306 ' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:59.306 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:59.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.307 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:02.597 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:02.597 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:02.597 Found net devices under 0000:84:00.0: cvl_0_0 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.597 15:04:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:02.597 Found net devices under 0000:84:00.1: cvl_0_1 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.597 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:09:02.598 00:09:02.598 --- 10.0.0.2 ping statistics --- 00:09:02.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.598 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:09:02.598 00:09:02.598 --- 10.0.0.1 ping statistics --- 00:09:02.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.598 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3069631 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3069631 00:09:02.598 15:04:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3069631 ']' 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.598 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 [2024-10-28 15:04:49.426150] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:09:02.598 [2024-10-28 15:04:49.426325] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.857 [2024-10-28 15:04:49.609184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.115 [2024-10-28 15:04:49.737446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.115 [2024-10-28 15:04:49.737550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.115 [2024-10-28 15:04:49.737587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.115 [2024-10-28 15:04:49.737625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.115 [2024-10-28 15:04:49.737668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
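The nvmf_tcp_init / nvmfappstart sequence traced above condenses to roughly the following shell steps; the interface names (cvl_0_0, cvl_0_1), addresses, port and core mask are taken from this log, and the SPDK paths are shortened for readability:

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP on port 4420, tagged so teardown can strip exactly this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # start the target inside the namespace; waitforlisten then polls /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &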
00:09:03.115 [2024-10-28 15:04:49.741291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.115 [2024-10-28 15:04:49.741395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.116 [2024-10-28 15:04:49.741447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.116 [2024-10-28 15:04:49.741450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 [2024-10-28 15:04:49.905272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.116 15:04:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 Malloc0 00:09:03.373 [2024-10-28 15:04:49.990730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3069798 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3069798 /var/tmp/bdevperf.sock 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3069798 ']' 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:03.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.373 { 00:09:03.373 "params": { 00:09:03.373 "name": "Nvme$subsystem", 00:09:03.373 "trtype": "$TEST_TRANSPORT", 00:09:03.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.373 "adrfam": "ipv4", 00:09:03.373 "trsvcid": "$NVMF_PORT", 00:09:03.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.373 "hdgst": ${hdgst:-false}, 00:09:03.373 "ddgst": ${ddgst:-false} 00:09:03.373 }, 00:09:03.373 "method": "bdev_nvme_attach_controller" 00:09:03.373 } 00:09:03.373 EOF 00:09:03.373 )") 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:03.373 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.373 "params": { 00:09:03.373 "name": "Nvme0", 00:09:03.373 "trtype": "tcp", 00:09:03.373 "traddr": "10.0.0.2", 00:09:03.373 "adrfam": "ipv4", 00:09:03.373 "trsvcid": "4420", 00:09:03.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:03.373 "hdgst": false, 00:09:03.373 "ddgst": false 00:09:03.373 }, 00:09:03.373 "method": "bdev_nvme_attach_controller" 00:09:03.373 }' 00:09:03.373 [2024-10-28 15:04:50.078071] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
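On the host side, gen_nvmf_target_json 0 emits the bdev_nvme_attach_controller entry printed above and bdevperf reads it over /dev/fd/63. A minimal stand-alone equivalent is sketched below: the parameter values are copied from the trace, while the surrounding "subsystems"/"bdev" wrapper that gen_nvmf_target_json adds is assumed rather than shown verbatim in the log, and /tmp/bdevperf.json merely stands in for the process substitution:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 64 outstanding 64 KiB verify I/Os for 10 seconds, RPC socket separate from the target's
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10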
00:09:03.373 [2024-10-28 15:04:50.078169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069798 ] 00:09:03.373 [2024-10-28 15:04:50.153099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.373 [2024-10-28 15:04:50.214364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.631 Running I/O for 10 seconds... 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:03.890 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:04.151 
15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.151 [2024-10-28 15:04:50.922116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a5d0 is same with the state(6) to be set 00:09:04.151 [2024-10-28 15:04:50.922223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a5d0 is same with the state(6) to be set 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.151 15:04:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:04.151 [2024-10-28 15:04:50.941027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.151 [2024-10-28 15:04:50.941075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.151 [2024-10-28 15:04:50.941118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.151 [2024-10-28 15:04:50.941146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.151 [2024-10-28 15:04:50.941174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239f040 is same with the state(6) to be set 00:09:04.151 [2024-10-28 15:04:50.941289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.151 [2024-10-28 15:04:50.941312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.151 [2024-10-28 15:04:50.941353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.151 [2024-10-28 15:04:50.941384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.151 [2024-10-28 15:04:50.941424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.151 [2024-10-28 15:04:50.941440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.151 [2024-10-28 15:04:50.941453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:04.152 [2024-10-28 15:04:50.941540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 
[2024-10-28 15:04:50.941880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.941971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.941985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 
15:04:50.942186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 
15:04:50.942475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.152 [2024-10-28 15:04:50.942675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.152 [2024-10-28 15:04:50.942691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 
15:04:50.942792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.942968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.942984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 
15:04:50.943113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.943269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.153 [2024-10-28 15:04:50.943282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.153 [2024-10-28 15:04:50.944479] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:04.153 task offset: 90112 on job bdev=Nvme0n1 fails 00:09:04.153 00:09:04.153 Latency(us) 00:09:04.153 [2024-10-28T14:04:51.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.153 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:04.153 Job: Nvme0n1 ended in about 0.46 seconds with error 00:09:04.153 Verification LBA range: start 0x0 length 0x400 00:09:04.153 Nvme0n1 : 0.46 1525.86 95.37 138.71 0.00 37454.92 2682.12 35146.71 00:09:04.153 [2024-10-28T14:04:51.020Z] =================================================================================================================== 00:09:04.153 [2024-10-28T14:04:51.020Z] Total : 1525.86 95.37 138.71 0.00 37454.92 2682.12 35146.71 00:09:04.153 [2024-10-28 15:04:50.947436] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.153 [2024-10-28 15:04:50.947470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239f040 (9): Bad file descriptor 00:09:04.153 [2024-10-28 15:04:50.997123] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3069798 00:09:05.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3069798) - No such process 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:05.087 { 00:09:05.087 "params": { 00:09:05.087 "name": "Nvme$subsystem", 00:09:05.087 "trtype": "$TEST_TRANSPORT", 00:09:05.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.087 "adrfam": "ipv4", 00:09:05.087 "trsvcid": "$NVMF_PORT", 00:09:05.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.087 "hdgst": ${hdgst:-false}, 00:09:05.087 "ddgst": ${ddgst:-false} 00:09:05.087 }, 00:09:05.087 "method": "bdev_nvme_attach_controller" 00:09:05.087 } 00:09:05.087 EOF 00:09:05.087 )") 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:05.087 15:04:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:05.087 "params": { 00:09:05.087 "name": "Nvme0", 00:09:05.087 "trtype": "tcp", 00:09:05.087 "traddr": "10.0.0.2", 00:09:05.087 "adrfam": "ipv4", 00:09:05.087 "trsvcid": "4420", 00:09:05.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:05.087 "hdgst": false, 00:09:05.087 "ddgst": false 00:09:05.087 }, 00:09:05.087 "method": "bdev_nvme_attach_controller" 00:09:05.087 }' 00:09:05.345 [2024-10-28 15:04:51.992102] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:09:05.345 [2024-10-28 15:04:51.992190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069977 ] 00:09:05.345 [2024-10-28 15:04:52.068140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.345 [2024-10-28 15:04:52.128267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.602 Running I/O for 1 seconds... 00:09:06.791 1561.00 IOPS, 97.56 MiB/s 00:09:06.791 Latency(us) 00:09:06.791 [2024-10-28T14:04:53.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.791 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:06.791 Verification LBA range: start 0x0 length 0x400 00:09:06.791 Nvme0n1 : 1.04 1600.08 100.00 0.00 0.00 39366.83 6068.15 34175.81 00:09:06.791 [2024-10-28T14:04:53.658Z] =================================================================================================================== 00:09:06.791 [2024-10-28T14:04:53.658Z] Total : 1600.08 100.00 0.00 0.00 39366.83 6068.15 34175.81 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.791 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.049 rmmod nvme_tcp 00:09:07.049 rmmod nvme_fabrics 00:09:07.049 rmmod nvme_keyring 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3069631 ']' 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3069631 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3069631 ']' 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3069631 00:09:07.049 15:04:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3069631 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:07.049 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3069631' 00:09:07.049 killing process with pid 3069631 00:09:07.050 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3069631 00:09:07.050 15:04:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3069631 00:09:07.310 [2024-10-28 15:04:54.057353] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.310 15:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:09.853 00:09:09.853 real 0m10.305s 00:09:09.853 user 0m21.059s 00:09:09.853 sys 0m3.844s 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.853 ************************************ 00:09:09.853 END TEST nvmf_host_management 00:09:09.853 ************************************ 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
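For reference, the nvmftestfini teardown traced above (between killprocess and the END TEST banner) reduces to roughly the following; the pid and device names are from this run, and the namespace removal is inferred because _remove_spdk_ns runs with its trace suppressed:

    sync
    modprobe -v -r nvme-tcp           # also unloads nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill 3069631 && wait 3069631      # killprocess: stop the nvmf_tgt started for this test
    # iptr: drop only the rules tagged SPDK_NVMF during setup, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed effect of remove_spdk_ns
    ip -4 addr flush cvl_0_1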
00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.853 ************************************ 00:09:09.853 START TEST nvmf_lvol 00:09:09.853 ************************************ 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:09.853 * Looking for test storage... 00:09:09.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lcov --version 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:09.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.853 --rc genhtml_branch_coverage=1 00:09:09.853 --rc genhtml_function_coverage=1 00:09:09.853 --rc genhtml_legend=1 00:09:09.853 --rc geninfo_all_blocks=1 00:09:09.853 --rc geninfo_unexecuted_blocks=1 00:09:09.853 00:09:09.853 ' 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:09.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.853 --rc genhtml_branch_coverage=1 00:09:09.853 --rc genhtml_function_coverage=1 00:09:09.853 --rc genhtml_legend=1 00:09:09.853 --rc geninfo_all_blocks=1 00:09:09.853 --rc geninfo_unexecuted_blocks=1 00:09:09.853 00:09:09.853 ' 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:09.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.853 --rc genhtml_branch_coverage=1 00:09:09.853 --rc genhtml_function_coverage=1 00:09:09.853 --rc genhtml_legend=1 00:09:09.853 --rc geninfo_all_blocks=1 00:09:09.853 --rc geninfo_unexecuted_blocks=1 00:09:09.853 00:09:09.853 ' 00:09:09.853 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:09.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.853 --rc genhtml_branch_coverage=1 00:09:09.853 --rc genhtml_function_coverage=1 00:09:09.853 --rc genhtml_legend=1 00:09:09.853 --rc geninfo_all_blocks=1 00:09:09.854 --rc geninfo_unexecuted_blocks=1 00:09:09.854 00:09:09.854 ' 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
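The lcov gate traced above (lt 1.15 2 via cmp_versions) decides which coverage options to export by splitting both version strings on dots, dashes and colons and comparing the pieces numerically from left to right. A standalone sketch of that comparison, assuming nothing SPDK-specific; the function name and the handling of non-numeric fields below are assumptions, not the actual body of scripts/common.sh.

# Field-wise "less than" version check in the spirit of the cmp_versions trace above.
version_lt() {                        # returns 0 (true) when $1 < $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i x y max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        [[ $x =~ ^[0-9]+$ ]] || x=0   # mirrors the traced 'decimal' integer check
        [[ $y =~ ^[0-9]+$ ]] || y=0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                          # versions are equal
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"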
00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.854 15:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.171 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:13.172 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:13.172 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.172 15:04:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:13.172 Found net devices under 0000:84:00.0: cvl_0_0 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:13.172 Found net devices under 0000:84:00.1: cvl_0_1 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:09:13.172 00:09:13.172 --- 10.0.0.2 ping statistics --- 00:09:13.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.172 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:09:13.172 00:09:13.172 --- 10.0.0.1 ping statistics --- 00:09:13.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.172 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3072309 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3072309 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3072309 ']' 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.172 15:04:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.172 [2024-10-28 15:04:59.762566] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
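The nvmftestinit trace above builds the TCP test network out of the two physical e810 ports: cvl_0_0 is moved into a fresh network namespace and becomes the target side (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1/24), an SPDK_NVMF-tagged iptables rule opens port 4420, and both directions are verified with a single ping before nvmf_tgt is started inside the namespace. A minimal sketch of that setup, reconstructed from the trace rather than copied from the harness, assuming root and the cvl_0_0 / cvl_0_1 names reported for 0000:84:00.0 and 0000:84:00.1.

# Recreate the target/initiator split seen in the trace above.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target NIC lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (default namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port; the comment lets the teardown find and drop the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator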
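The test body that follows drives everything through rpc.py against that in-namespace target: two 64 MiB malloc bdevs are combined into a raid0, a logical-volume store is built on the raid, a 20 MiB lvol is carved out and exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, spdk_nvme_perf writes to it while the volume is snapshotted, resized to 30 MiB, cloned and inflated, and the subsystem, lvol and lvstore are then deleted. A condensed sketch of that sequence, assuming a running nvmf_tgt on the default RPC socket and the sizes used in this run; UUID handling is simplified relative to the harness.

# Condensed nvmf_lvol flow, as driven by scripts/rpc.py in the trace below.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# While I/O runs (spdk_nvme_perf in the harness), exercise the lvol features:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

# Teardown mirrors the end of the test:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"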
00:09:13.172 [2024-10-28 15:04:59.762678] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.172 [2024-10-28 15:04:59.902167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.172 [2024-10-28 15:05:00.016781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.172 [2024-10-28 15:05:00.016892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.172 [2024-10-28 15:05:00.016929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.172 [2024-10-28 15:05:00.016961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.172 [2024-10-28 15:05:00.017002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.172 [2024-10-28 15:05:00.019968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.172 [2024-10-28 15:05:00.020067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.172 [2024-10-28 15:05:00.020077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.551 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.551 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:14.551 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.551 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.551 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.551 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.551 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.809 [2024-10-28 15:05:01.657545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.069 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.638 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:15.638 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.206 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:16.206 15:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:16.470 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:17.086 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=59dfcdc6-be2e-410c-beb4-2d5b65dc2697 00:09:17.086 15:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 59dfcdc6-be2e-410c-beb4-2d5b65dc2697 lvol 20 00:09:17.360 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2de664b5-ad05-456a-b356-f0d42b555097 00:09:17.360 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:17.927 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2de664b5-ad05-456a-b356-f0d42b555097 00:09:18.185 15:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:18.443 [2024-10-28 15:05:05.183873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.443 15:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:19.011 15:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3073044 00:09:19.011 15:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:19.011 15:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:20.387 15:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2de664b5-ad05-456a-b356-f0d42b555097 MY_SNAPSHOT 00:09:20.646 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=839ba561-9f0b-483d-a0dd-063a1555b949 00:09:20.646 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2de664b5-ad05-456a-b356-f0d42b555097 30 00:09:21.212 15:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 839ba561-9f0b-483d-a0dd-063a1555b949 MY_CLONE 00:09:21.469 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=dbe8f65a-4e99-4ea8-9562-e3b89fc87026 00:09:21.469 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dbe8f65a-4e99-4ea8-9562-e3b89fc87026 00:09:22.401 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3073044 00:09:30.520 Initializing NVMe Controllers 00:09:30.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:30.520 Controller IO queue size 128, less than required. 00:09:30.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:30.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:30.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:30.520 Initialization complete. Launching workers. 00:09:30.520 ======================================================== 00:09:30.520 Latency(us) 00:09:30.520 Device Information : IOPS MiB/s Average min max 00:09:30.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10347.50 40.42 12374.11 2068.45 139550.36 00:09:30.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10271.40 40.12 12460.62 2145.67 63542.83 00:09:30.521 ======================================================== 00:09:30.521 Total : 20618.90 80.54 12417.21 2068.45 139550.36 00:09:30.521 00:09:30.521 15:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:30.521 15:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2de664b5-ad05-456a-b356-f0d42b555097 00:09:30.780 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59dfcdc6-be2e-410c-beb4-2d5b65dc2697 00:09:31.040 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:31.040 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:31.040 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:31.040 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.040 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:31.040 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.041 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:31.041 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.041 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.041 rmmod nvme_tcp 00:09:31.041 rmmod nvme_fabrics 00:09:31.299 rmmod nvme_keyring 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3072309 ']' 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3072309 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3072309 ']' 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3072309 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072309 00:09:31.299 15:05:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072309' 00:09:31.299 killing process with pid 3072309 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3072309 00:09:31.299 15:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3072309 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.559 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.111 00:09:34.111 real 0m24.185s 00:09:34.111 user 1m20.360s 00:09:34.111 sys 0m6.987s 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:34.111 ************************************ 00:09:34.111 END TEST nvmf_lvol 00:09:34.111 ************************************ 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.111 ************************************ 00:09:34.111 START TEST nvmf_lvs_grow 00:09:34.111 ************************************ 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:34.111 * Looking for test storage... 
00:09:34.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lcov --version 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.111 --rc genhtml_branch_coverage=1 00:09:34.111 --rc genhtml_function_coverage=1 00:09:34.111 --rc genhtml_legend=1 00:09:34.111 --rc geninfo_all_blocks=1 00:09:34.111 --rc geninfo_unexecuted_blocks=1 00:09:34.111 00:09:34.111 ' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.111 --rc genhtml_branch_coverage=1 00:09:34.111 --rc genhtml_function_coverage=1 00:09:34.111 --rc genhtml_legend=1 00:09:34.111 --rc geninfo_all_blocks=1 00:09:34.111 --rc geninfo_unexecuted_blocks=1 00:09:34.111 00:09:34.111 ' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.111 --rc genhtml_branch_coverage=1 00:09:34.111 --rc genhtml_function_coverage=1 00:09:34.111 --rc genhtml_legend=1 00:09:34.111 --rc geninfo_all_blocks=1 00:09:34.111 --rc geninfo_unexecuted_blocks=1 00:09:34.111 00:09:34.111 ' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.111 --rc genhtml_branch_coverage=1 00:09:34.111 --rc genhtml_function_coverage=1 00:09:34.111 --rc genhtml_legend=1 00:09:34.111 --rc geninfo_all_blocks=1 00:09:34.111 --rc geninfo_unexecuted_blocks=1 00:09:34.111 00:09:34.111 ' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:34.111 15:05:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.111 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.112 15:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:37.407 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:37.407 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.407 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.408 15:05:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:37.408 Found net devices under 0000:84:00.0: cvl_0_0 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:37.408 Found net devices under 0000:84:00.1: cvl_0_1 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.408 15:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:09:37.408 00:09:37.408 --- 10.0.0.2 ping statistics --- 00:09:37.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.408 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:09:37.408 00:09:37.408 --- 10.0.0.1 ping statistics --- 00:09:37.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.408 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3076587 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3076587 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3076587 ']' 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.408 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:37.408 [2024-10-28 15:05:24.143702] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
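[editor's note] The trace above is nvmf_tcp_init wiring the two E810 ports (cvl_0_0 / cvl_0_1) into a point-to-point NVMe/TCP test path before the target comes up: one port is moved into a private namespace for the target at 10.0.0.2, the other stays in the default namespace as the initiator at 10.0.0.1, port 4420 is opened, both directions are pinged, and the kernel initiator module is loaded. A condensed, standalone sketch of the same steps — interface names, addresses, and the namespace name are taken directly from the trace; any other host would substitute its own:

    # Target port goes into its own namespace; initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the target, then sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Host-side NVMe/TCP initiator support.
    modprobe nvme-tcp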
00:09:37.408 [2024-10-28 15:05:24.143819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.667 [2024-10-28 15:05:24.289979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.667 [2024-10-28 15:05:24.401110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.667 [2024-10-28 15:05:24.401211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.667 [2024-10-28 15:05:24.401248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.667 [2024-10-28 15:05:24.401279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.667 [2024-10-28 15:05:24.401305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.668 [2024-10-28 15:05:24.402554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.927 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.927 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:37.927 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.927 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:37.927 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:37.927 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.927 15:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:38.497 [2024-10-28 15:05:25.267186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 ************************************ 00:09:38.497 START TEST lvs_grow_clean 00:09:38.497 ************************************ 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:38.497 15:05:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.497 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:39.066 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:39.066 15:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:39.636 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:39.636 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:39.636 15:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:40.583 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:40.583 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:40.583 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df5ba7ad-87a2-49a5-8b32-82890f89c49b lvol 150 00:09:40.849 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1c622170-e59d-43e9-a827-1821a3830ee3 00:09:40.849 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:40.849 15:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:41.418 [2024-10-28 15:05:28.047665] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:41.418 [2024-10-28 15:05:28.047824] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:41.418 true 00:09:41.418 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:41.418 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:41.677 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:41.677 15:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:42.247 15:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1c622170-e59d-43e9-a827-1821a3830ee3 00:09:43.186 15:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:43.186 [2024-10-28 15:05:29.990612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.186 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.755 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3077418 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3077418 /var/tmp/bdevperf.sock 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3077418 ']' 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:43.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.756 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:43.756 [2024-10-28 15:05:30.479021] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
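[editor's note] By the time bdevperf starts initializing here, the lvs_grow helper has already provisioned its test stack over RPC: a 200 MiB AIO-backed bdev with 4 KiB blocks, a logical-volume store with 4 MiB clusters (49 data clusters), a 150 MiB lvol, and an NVMe-oF subsystem exporting that lvol over TCP on 10.0.0.2:4420. A minimal sketch of that sequence, assuming $RPC and $AIO are just shorthand for the rpc.py and backing-file paths shown in the trace, and that $lvs/$lvol capture the UUIDs the create calls print:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

    # 200 MiB backing file exposed as an AIO bdev with 4 KiB blocks.
    rm -f "$AIO"; truncate -s 200M "$AIO"
    $RPC bdev_aio_create "$AIO" aio_bdev 4096

    # Lvstore with 4 MiB clusters, then a 150 MiB lvol on top of it.
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)

    # Export the lvol over NVMe/TCP (the transport itself was created earlier, at @100).
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # bdevperf (listening on its own RPC socket) then attaches as the initiator:
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0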
00:09:43.756 [2024-10-28 15:05:30.479136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077418 ] 00:09:43.756 [2024-10-28 15:05:30.595990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.015 [2024-10-28 15:05:30.701742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.273 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.273 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:44.273 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:44.842 Nvme0n1 00:09:44.842 15:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:45.782 [ 00:09:45.782 { 00:09:45.782 "name": "Nvme0n1", 00:09:45.782 "aliases": [ 00:09:45.782 "1c622170-e59d-43e9-a827-1821a3830ee3" 00:09:45.782 ], 00:09:45.782 "product_name": "NVMe disk", 00:09:45.782 "block_size": 4096, 00:09:45.782 "num_blocks": 38912, 00:09:45.782 "uuid": "1c622170-e59d-43e9-a827-1821a3830ee3", 00:09:45.782 "numa_id": 1, 00:09:45.782 "assigned_rate_limits": { 00:09:45.782 "rw_ios_per_sec": 0, 00:09:45.782 "rw_mbytes_per_sec": 0, 00:09:45.782 "r_mbytes_per_sec": 0, 00:09:45.782 "w_mbytes_per_sec": 0 00:09:45.782 }, 00:09:45.782 "claimed": false, 00:09:45.782 "zoned": false, 00:09:45.782 "supported_io_types": { 00:09:45.782 "read": true, 00:09:45.782 "write": true, 00:09:45.782 "unmap": true, 00:09:45.782 "flush": true, 00:09:45.782 "reset": true, 00:09:45.782 "nvme_admin": true, 00:09:45.782 "nvme_io": true, 00:09:45.782 "nvme_io_md": false, 00:09:45.782 "write_zeroes": true, 00:09:45.782 "zcopy": false, 00:09:45.782 "get_zone_info": false, 00:09:45.782 "zone_management": false, 00:09:45.782 "zone_append": false, 00:09:45.782 "compare": true, 00:09:45.782 "compare_and_write": true, 00:09:45.782 "abort": true, 00:09:45.782 "seek_hole": false, 00:09:45.782 "seek_data": false, 00:09:45.782 "copy": true, 00:09:45.782 "nvme_iov_md": false 00:09:45.782 }, 00:09:45.782 "memory_domains": [ 00:09:45.782 { 00:09:45.782 "dma_device_id": "system", 00:09:45.782 "dma_device_type": 1 00:09:45.782 } 00:09:45.782 ], 00:09:45.782 "driver_specific": { 00:09:45.782 "nvme": [ 00:09:45.782 { 00:09:45.782 "trid": { 00:09:45.782 "trtype": "TCP", 00:09:45.782 "adrfam": "IPv4", 00:09:45.782 "traddr": "10.0.0.2", 00:09:45.782 "trsvcid": "4420", 00:09:45.782 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:45.782 }, 00:09:45.782 "ctrlr_data": { 00:09:45.782 "cntlid": 1, 00:09:45.782 "vendor_id": "0x8086", 00:09:45.782 "model_number": "SPDK bdev Controller", 00:09:45.782 "serial_number": "SPDK0", 00:09:45.782 "firmware_revision": "25.01", 00:09:45.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.782 "oacs": { 00:09:45.782 "security": 0, 00:09:45.782 "format": 0, 00:09:45.782 "firmware": 0, 00:09:45.782 "ns_manage": 0 00:09:45.782 }, 00:09:45.782 "multi_ctrlr": true, 00:09:45.782 
"ana_reporting": false 00:09:45.782 }, 00:09:45.782 "vs": { 00:09:45.782 "nvme_version": "1.3" 00:09:45.782 }, 00:09:45.782 "ns_data": { 00:09:45.782 "id": 1, 00:09:45.782 "can_share": true 00:09:45.782 } 00:09:45.782 } 00:09:45.782 ], 00:09:45.782 "mp_policy": "active_passive" 00:09:45.782 } 00:09:45.782 } 00:09:45.782 ] 00:09:45.782 15:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3077562 00:09:45.782 15:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:45.782 15:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:45.782 Running I/O for 10 seconds... 00:09:46.724 Latency(us) 00:09:46.724 [2024-10-28T14:05:33.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.724 Nvme0n1 : 1.00 6415.00 25.06 0.00 0.00 0.00 0.00 0.00 00:09:46.724 [2024-10-28T14:05:33.591Z] =================================================================================================================== 00:09:46.724 [2024-10-28T14:05:33.591Z] Total : 6415.00 25.06 0.00 0.00 0.00 0.00 0.00 00:09:46.724 00:09:47.665 15:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:47.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.665 Nvme0n1 : 2.00 6414.00 25.05 0.00 0.00 0.00 0.00 0.00 00:09:47.665 [2024-10-28T14:05:34.532Z] =================================================================================================================== 00:09:47.665 [2024-10-28T14:05:34.532Z] Total : 6414.00 25.05 0.00 0.00 0.00 0.00 0.00 00:09:47.665 00:09:48.235 true 00:09:48.235 15:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:48.236 15:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:48.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.806 Nvme0n1 : 3.00 6435.00 25.14 0.00 0.00 0.00 0.00 0.00 00:09:48.806 [2024-10-28T14:05:35.673Z] =================================================================================================================== 00:09:48.806 [2024-10-28T14:05:35.673Z] Total : 6435.00 25.14 0.00 0.00 0.00 0.00 0.00 00:09:48.806 00:09:49.094 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:49.094 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:49.094 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3077562 00:09:49.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.664 Nvme0n1 : 4.00 6445.50 25.18 0.00 0.00 0.00 0.00 0.00 00:09:49.664 [2024-10-28T14:05:36.531Z] 
=================================================================================================================== 00:09:49.664 [2024-10-28T14:05:36.531Z] Total : 6445.50 25.18 0.00 0.00 0.00 0.00 0.00 00:09:49.664 00:09:50.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.606 Nvme0n1 : 5.00 6451.80 25.20 0.00 0.00 0.00 0.00 0.00 00:09:50.606 [2024-10-28T14:05:37.473Z] =================================================================================================================== 00:09:50.606 [2024-10-28T14:05:37.473Z] Total : 6451.80 25.20 0.00 0.00 0.00 0.00 0.00 00:09:50.606 00:09:51.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.990 Nvme0n1 : 6.00 6498.33 25.38 0.00 0.00 0.00 0.00 0.00 00:09:51.990 [2024-10-28T14:05:38.857Z] =================================================================================================================== 00:09:51.990 [2024-10-28T14:05:38.857Z] Total : 6498.33 25.38 0.00 0.00 0.00 0.00 0.00 00:09:51.990 00:09:52.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.930 Nvme0n1 : 7.00 6513.43 25.44 0.00 0.00 0.00 0.00 0.00 00:09:52.930 [2024-10-28T14:05:39.797Z] =================================================================================================================== 00:09:52.930 [2024-10-28T14:05:39.797Z] Total : 6513.43 25.44 0.00 0.00 0.00 0.00 0.00 00:09:52.930 00:09:53.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.871 Nvme0n1 : 8.00 6556.62 25.61 0.00 0.00 0.00 0.00 0.00 00:09:53.871 [2024-10-28T14:05:40.738Z] =================================================================================================================== 00:09:53.871 [2024-10-28T14:05:40.738Z] Total : 6556.62 25.61 0.00 0.00 0.00 0.00 0.00 00:09:53.871 00:09:54.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.813 Nvme0n1 : 9.00 6561.89 25.63 0.00 0.00 0.00 0.00 0.00 00:09:54.813 [2024-10-28T14:05:41.680Z] =================================================================================================================== 00:09:54.813 [2024-10-28T14:05:41.680Z] Total : 6561.89 25.63 0.00 0.00 0.00 0.00 0.00 00:09:54.813 00:09:55.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.752 Nvme0n1 : 10.00 6775.80 26.47 0.00 0.00 0.00 0.00 0.00 00:09:55.752 [2024-10-28T14:05:42.619Z] =================================================================================================================== 00:09:55.752 [2024-10-28T14:05:42.619Z] Total : 6775.80 26.47 0.00 0.00 0.00 0.00 0.00 00:09:55.752 00:09:55.752 00:09:55.752 Latency(us) 00:09:55.752 [2024-10-28T14:05:42.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.752 Nvme0n1 : 10.02 6778.30 26.48 0.00 0.00 18871.09 4878.79 37476.88 00:09:55.752 [2024-10-28T14:05:42.619Z] =================================================================================================================== 00:09:55.752 [2024-10-28T14:05:42.619Z] Total : 6778.30 26.48 0.00 0.00 18871.09 4878.79 37476.88 00:09:55.752 { 00:09:55.752 "results": [ 00:09:55.752 { 00:09:55.752 "job": "Nvme0n1", 00:09:55.752 "core_mask": "0x2", 00:09:55.752 "workload": "randwrite", 00:09:55.752 "status": "finished", 00:09:55.752 "queue_depth": 128, 00:09:55.752 "io_size": 4096, 00:09:55.752 "runtime": 
10.015191, 00:09:55.752 "iops": 6778.303079791489, 00:09:55.752 "mibps": 26.477746405435504, 00:09:55.752 "io_failed": 0, 00:09:55.752 "io_timeout": 0, 00:09:55.752 "avg_latency_us": 18871.087985631686, 00:09:55.752 "min_latency_us": 4878.791111111111, 00:09:55.752 "max_latency_us": 37476.88296296296 00:09:55.752 } 00:09:55.752 ], 00:09:55.752 "core_count": 1 00:09:55.752 } 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3077418 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3077418 ']' 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3077418 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3077418 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3077418' 00:09:55.752 killing process with pid 3077418 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3077418 00:09:55.752 Received shutdown signal, test time was about 10.000000 seconds 00:09:55.752 00:09:55.752 Latency(us) 00:09:55.752 [2024-10-28T14:05:42.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.752 [2024-10-28T14:05:42.619Z] =================================================================================================================== 00:09:55.752 [2024-10-28T14:05:42.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:55.752 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3077418 00:09:56.373 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.942 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:57.513 15:05:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:57.513 15:05:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:57.772 15:05:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:57.772 15:05:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:57.772 15:05:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:58.707 [2024-10-28 15:05:45.290580] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:58.707 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:09:59.315 request: 00:09:59.315 { 00:09:59.315 "uuid": "df5ba7ad-87a2-49a5-8b32-82890f89c49b", 00:09:59.315 "method": "bdev_lvol_get_lvstores", 00:09:59.315 "req_id": 1 00:09:59.315 } 00:09:59.315 Got JSON-RPC error response 00:09:59.315 response: 00:09:59.315 { 00:09:59.315 "code": -19, 00:09:59.315 "message": "No such device" 00:09:59.315 } 00:09:59.315 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:59.315 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.315 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:59.315 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.315 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:59.883 aio_bdev 00:09:59.883 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1c622170-e59d-43e9-a827-1821a3830ee3 00:09:59.883 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=1c622170-e59d-43e9-a827-1821a3830ee3 00:09:59.883 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.883 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:59.883 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.883 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.883 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.143 15:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1c622170-e59d-43e9-a827-1821a3830ee3 -t 2000 00:10:01.079 [ 00:10:01.079 { 00:10:01.079 "name": "1c622170-e59d-43e9-a827-1821a3830ee3", 00:10:01.079 "aliases": [ 00:10:01.079 "lvs/lvol" 00:10:01.079 ], 00:10:01.079 "product_name": "Logical Volume", 00:10:01.079 "block_size": 4096, 00:10:01.079 "num_blocks": 38912, 00:10:01.079 "uuid": "1c622170-e59d-43e9-a827-1821a3830ee3", 00:10:01.079 "assigned_rate_limits": { 00:10:01.079 "rw_ios_per_sec": 0, 00:10:01.079 "rw_mbytes_per_sec": 0, 00:10:01.079 "r_mbytes_per_sec": 0, 00:10:01.079 "w_mbytes_per_sec": 0 00:10:01.079 }, 00:10:01.079 "claimed": false, 00:10:01.079 "zoned": false, 00:10:01.079 "supported_io_types": { 00:10:01.079 "read": true, 00:10:01.079 "write": true, 00:10:01.079 "unmap": true, 00:10:01.079 "flush": false, 00:10:01.079 "reset": true, 00:10:01.079 "nvme_admin": false, 00:10:01.079 "nvme_io": false, 00:10:01.079 "nvme_io_md": false, 00:10:01.079 "write_zeroes": true, 00:10:01.079 "zcopy": false, 00:10:01.079 "get_zone_info": false, 00:10:01.079 "zone_management": false, 00:10:01.079 "zone_append": false, 00:10:01.079 "compare": false, 00:10:01.079 "compare_and_write": false, 00:10:01.079 "abort": false, 00:10:01.079 "seek_hole": true, 00:10:01.079 "seek_data": true, 00:10:01.079 "copy": false, 00:10:01.079 "nvme_iov_md": false 00:10:01.079 }, 00:10:01.079 "driver_specific": { 00:10:01.079 "lvol": { 00:10:01.079 "lvol_store_uuid": "df5ba7ad-87a2-49a5-8b32-82890f89c49b", 00:10:01.079 "base_bdev": "aio_bdev", 00:10:01.079 "thin_provision": false, 00:10:01.079 "num_allocated_clusters": 38, 00:10:01.079 "snapshot": false, 00:10:01.079 "clone": false, 00:10:01.079 "esnap_clone": false 00:10:01.079 } 00:10:01.079 } 00:10:01.079 } 00:10:01.079 ] 00:10:01.079 15:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:01.079 15:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:01.079 15:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:10:01.337 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:01.337 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:10:01.337 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:01.597 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:01.597 15:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1c622170-e59d-43e9-a827-1821a3830ee3 00:10:02.166 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df5ba7ad-87a2-49a5-8b32-82890f89c49b 00:10:03.106 15:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:03.675 00:10:03.675 real 0m25.054s 00:10:03.675 user 0m25.244s 00:10:03.675 sys 0m2.864s 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:03.675 ************************************ 00:10:03.675 END TEST lvs_grow_clean 00:10:03.675 ************************************ 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:03.675 ************************************ 00:10:03.675 START TEST lvs_grow_dirty 00:10:03.675 ************************************ 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:03.675 15:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:04.615 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:04.615 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:05.186 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=182d8286-9989-4c49-908a-006bb08b83b2 00:10:05.186 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:05.186 15:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:05.757 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:05.757 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:05.757 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 182d8286-9989-4c49-908a-006bb08b83b2 lvol 150 00:10:06.326 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=58b1b04f-3267-4c15-8f1a-f3d9b0bab356 00:10:06.326 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:06.326 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:06.894 [2024-10-28 15:05:53.538631] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:06.894 [2024-10-28 15:05:53.538822] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:06.894 true 00:10:06.894 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:06.894 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:07.526 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:07.526 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:08.097 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58b1b04f-3267-4c15-8f1a-f3d9b0bab356 00:10:09.037 15:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:09.607 [2024-10-28 15:05:56.268000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.607 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3080404 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3080404 /var/tmp/bdevperf.sock 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3080404 ']' 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:10.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.176 15:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:10.437 [2024-10-28 15:05:57.059896] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
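[editor's note] The lvs_grow_dirty run starting here follows the same shape as the clean run above: while bdevperf drives a 10-second randwrite workload, the script grows the lvstore into the already-enlarged backing file and re-reads the cluster count over RPC. A condensed sketch of that mid-run step, reusing the same $RPC/$lvs placeholders as above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # The backing file was already truncated to 400M and picked up via
    # bdev_aio_rescan during setup (block count 51200 -> 102400); mid-run the
    # script only has to grow the lvstore into that space while I/O continues.
    $RPC bdev_lvol_grow_lvstore -u "$lvs"

    # With 4 MiB clusters this should move total_data_clusters from 49 to 99.
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'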
00:10:10.437 [2024-10-28 15:05:57.059978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080404 ] 00:10:10.437 [2024-10-28 15:05:57.198677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.697 [2024-10-28 15:05:57.318383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.697 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.697 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:10.697 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:11.638 Nvme0n1 00:10:11.639 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:12.209 [ 00:10:12.209 { 00:10:12.209 "name": "Nvme0n1", 00:10:12.209 "aliases": [ 00:10:12.209 "58b1b04f-3267-4c15-8f1a-f3d9b0bab356" 00:10:12.209 ], 00:10:12.209 "product_name": "NVMe disk", 00:10:12.209 "block_size": 4096, 00:10:12.209 "num_blocks": 38912, 00:10:12.209 "uuid": "58b1b04f-3267-4c15-8f1a-f3d9b0bab356", 00:10:12.209 "numa_id": 1, 00:10:12.209 "assigned_rate_limits": { 00:10:12.209 "rw_ios_per_sec": 0, 00:10:12.209 "rw_mbytes_per_sec": 0, 00:10:12.209 "r_mbytes_per_sec": 0, 00:10:12.209 "w_mbytes_per_sec": 0 00:10:12.209 }, 00:10:12.209 "claimed": false, 00:10:12.209 "zoned": false, 00:10:12.209 "supported_io_types": { 00:10:12.209 "read": true, 00:10:12.209 "write": true, 00:10:12.209 "unmap": true, 00:10:12.209 "flush": true, 00:10:12.209 "reset": true, 00:10:12.209 "nvme_admin": true, 00:10:12.209 "nvme_io": true, 00:10:12.209 "nvme_io_md": false, 00:10:12.209 "write_zeroes": true, 00:10:12.209 "zcopy": false, 00:10:12.209 "get_zone_info": false, 00:10:12.209 "zone_management": false, 00:10:12.209 "zone_append": false, 00:10:12.209 "compare": true, 00:10:12.209 "compare_and_write": true, 00:10:12.209 "abort": true, 00:10:12.209 "seek_hole": false, 00:10:12.209 "seek_data": false, 00:10:12.209 "copy": true, 00:10:12.209 "nvme_iov_md": false 00:10:12.209 }, 00:10:12.209 "memory_domains": [ 00:10:12.209 { 00:10:12.209 "dma_device_id": "system", 00:10:12.209 "dma_device_type": 1 00:10:12.209 } 00:10:12.209 ], 00:10:12.209 "driver_specific": { 00:10:12.209 "nvme": [ 00:10:12.209 { 00:10:12.209 "trid": { 00:10:12.209 "trtype": "TCP", 00:10:12.209 "adrfam": "IPv4", 00:10:12.209 "traddr": "10.0.0.2", 00:10:12.209 "trsvcid": "4420", 00:10:12.209 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:12.209 }, 00:10:12.209 "ctrlr_data": { 00:10:12.209 "cntlid": 1, 00:10:12.209 "vendor_id": "0x8086", 00:10:12.209 "model_number": "SPDK bdev Controller", 00:10:12.209 "serial_number": "SPDK0", 00:10:12.209 "firmware_revision": "25.01", 00:10:12.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.209 "oacs": { 00:10:12.209 "security": 0, 00:10:12.209 "format": 0, 00:10:12.209 "firmware": 0, 00:10:12.209 "ns_manage": 0 00:10:12.209 }, 00:10:12.209 "multi_ctrlr": true, 00:10:12.209 
"ana_reporting": false 00:10:12.209 }, 00:10:12.209 "vs": { 00:10:12.209 "nvme_version": "1.3" 00:10:12.209 }, 00:10:12.209 "ns_data": { 00:10:12.209 "id": 1, 00:10:12.209 "can_share": true 00:10:12.209 } 00:10:12.209 } 00:10:12.209 ], 00:10:12.209 "mp_policy": "active_passive" 00:10:12.209 } 00:10:12.209 } 00:10:12.209 ] 00:10:12.209 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3080662 00:10:12.209 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:12.209 15:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.469 Running I/O for 10 seconds... 00:10:13.406 Latency(us) 00:10:13.406 [2024-10-28T14:06:00.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.406 Nvme0n1 : 1.00 6351.00 24.81 0.00 0.00 0.00 0.00 0.00 00:10:13.406 [2024-10-28T14:06:00.273Z] =================================================================================================================== 00:10:13.406 [2024-10-28T14:06:00.273Z] Total : 6351.00 24.81 0.00 0.00 0.00 0.00 0.00 00:10:13.406 00:10:14.348 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:14.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.348 Nvme0n1 : 2.00 6858.50 26.79 0.00 0.00 0.00 0.00 0.00 00:10:14.348 [2024-10-28T14:06:01.215Z] =================================================================================================================== 00:10:14.348 [2024-10-28T14:06:01.215Z] Total : 6858.50 26.79 0.00 0.00 0.00 0.00 0.00 00:10:14.348 00:10:14.608 true 00:10:14.608 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:14.608 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:15.179 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:15.179 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:15.179 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3080662 00:10:15.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.439 Nvme0n1 : 3.00 6900.67 26.96 0.00 0.00 0.00 0.00 0.00 00:10:15.439 [2024-10-28T14:06:02.306Z] =================================================================================================================== 00:10:15.439 [2024-10-28T14:06:02.306Z] Total : 6900.67 26.96 0.00 0.00 0.00 0.00 0.00 00:10:15.439 00:10:16.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.381 Nvme0n1 : 4.00 6858.25 26.79 0.00 0.00 0.00 0.00 0.00 00:10:16.381 [2024-10-28T14:06:03.248Z] 
=================================================================================================================== 00:10:16.381 [2024-10-28T14:06:03.248Z] Total : 6858.25 26.79 0.00 0.00 0.00 0.00 0.00 00:10:16.381 00:10:17.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.323 Nvme0n1 : 5.00 6807.40 26.59 0.00 0.00 0.00 0.00 0.00 00:10:17.323 [2024-10-28T14:06:04.190Z] =================================================================================================================== 00:10:17.323 [2024-10-28T14:06:04.190Z] Total : 6807.40 26.59 0.00 0.00 0.00 0.00 0.00 00:10:17.323 00:10:18.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.708 Nvme0n1 : 6.00 6752.33 26.38 0.00 0.00 0.00 0.00 0.00 00:10:18.708 [2024-10-28T14:06:05.575Z] =================================================================================================================== 00:10:18.708 [2024-10-28T14:06:05.575Z] Total : 6752.33 26.38 0.00 0.00 0.00 0.00 0.00 00:10:18.708 00:10:19.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.649 Nvme0n1 : 7.00 6717.86 26.24 0.00 0.00 0.00 0.00 0.00 00:10:19.649 [2024-10-28T14:06:06.516Z] =================================================================================================================== 00:10:19.649 [2024-10-28T14:06:06.516Z] Total : 6717.86 26.24 0.00 0.00 0.00 0.00 0.00 00:10:19.649 00:10:20.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.586 Nvme0n1 : 8.00 6791.12 26.53 0.00 0.00 0.00 0.00 0.00 00:10:20.586 [2024-10-28T14:06:07.453Z] =================================================================================================================== 00:10:20.586 [2024-10-28T14:06:07.454Z] Total : 6791.12 26.53 0.00 0.00 0.00 0.00 0.00 00:10:20.587 00:10:21.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.527 Nvme0n1 : 9.00 6770.33 26.45 0.00 0.00 0.00 0.00 0.00 00:10:21.527 [2024-10-28T14:06:08.394Z] =================================================================================================================== 00:10:21.527 [2024-10-28T14:06:08.394Z] Total : 6770.33 26.45 0.00 0.00 0.00 0.00 0.00 00:10:21.527 00:10:22.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.467 Nvme0n1 : 10.00 6791.80 26.53 0.00 0.00 0.00 0.00 0.00 00:10:22.467 [2024-10-28T14:06:09.334Z] =================================================================================================================== 00:10:22.467 [2024-10-28T14:06:09.334Z] Total : 6791.80 26.53 0.00 0.00 0.00 0.00 0.00 00:10:22.467 00:10:22.467 00:10:22.467 Latency(us) 00:10:22.467 [2024-10-28T14:06:09.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.467 Nvme0n1 : 10.02 6793.25 26.54 0.00 0.00 18829.50 6796.33 40001.23 00:10:22.467 [2024-10-28T14:06:09.334Z] =================================================================================================================== 00:10:22.467 [2024-10-28T14:06:09.334Z] Total : 6793.25 26.54 0.00 0.00 18829.50 6796.33 40001.23 00:10:22.467 { 00:10:22.467 "results": [ 00:10:22.467 { 00:10:22.467 "job": "Nvme0n1", 00:10:22.467 "core_mask": "0x2", 00:10:22.467 "workload": "randwrite", 00:10:22.467 "status": "finished", 00:10:22.467 "queue_depth": 128, 00:10:22.467 "io_size": 4096, 00:10:22.467 "runtime": 
10.016705, 00:10:22.467 "iops": 6793.251872746577, 00:10:22.467 "mibps": 26.536140127916315, 00:10:22.467 "io_failed": 0, 00:10:22.467 "io_timeout": 0, 00:10:22.467 "avg_latency_us": 18829.497769373877, 00:10:22.467 "min_latency_us": 6796.325925925926, 00:10:22.467 "max_latency_us": 40001.23259259259 00:10:22.467 } 00:10:22.467 ], 00:10:22.467 "core_count": 1 00:10:22.467 } 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3080404 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3080404 ']' 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3080404 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3080404 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3080404' 00:10:22.467 killing process with pid 3080404 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3080404 00:10:22.467 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.467 00:10:22.467 Latency(us) 00:10:22.467 [2024-10-28T14:06:09.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.467 [2024-10-28T14:06:09.334Z] =================================================================================================================== 00:10:22.467 [2024-10-28T14:06:09.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.467 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3080404 00:10:22.726 15:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:23.295 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:23.867 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:23.867 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:24.436 15:06:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3076587 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3076587 00:10:24.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3076587 Killed "${NVMF_APP[@]}" "$@" 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3082717 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3082717 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3082717 ']' 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.436 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:24.436 [2024-10-28 15:06:11.136926] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:10:24.436 [2024-10-28 15:06:11.137011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.436 [2024-10-28 15:06:11.272667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.697 [2024-10-28 15:06:11.387380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.697 [2024-10-28 15:06:11.387478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.697 [2024-10-28 15:06:11.387514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.697 [2024-10-28 15:06:11.387543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:24.697 [2024-10-28 15:06:11.387569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.697 [2024-10-28 15:06:11.388898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.637 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.637 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:25.637 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:25.637 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.637 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:25.637 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.637 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.897 [2024-10-28 15:06:12.665345] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:25.897 [2024-10-28 15:06:12.665683] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:25.897 [2024-10-28 15:06:12.665817] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 58b1b04f-3267-4c15-8f1a-f3d9b0bab356 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=58b1b04f-3267-4c15-8f1a-f3d9b0bab356 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.897 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:26.838 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58b1b04f-3267-4c15-8f1a-f3d9b0bab356 -t 2000 00:10:27.414 [ 00:10:27.414 { 00:10:27.414 "name": "58b1b04f-3267-4c15-8f1a-f3d9b0bab356", 00:10:27.414 "aliases": [ 00:10:27.414 "lvs/lvol" 00:10:27.414 ], 00:10:27.414 "product_name": "Logical Volume", 00:10:27.414 "block_size": 4096, 00:10:27.414 "num_blocks": 38912, 00:10:27.414 "uuid": "58b1b04f-3267-4c15-8f1a-f3d9b0bab356", 00:10:27.414 "assigned_rate_limits": { 00:10:27.414 "rw_ios_per_sec": 0, 00:10:27.414 "rw_mbytes_per_sec": 0, 
00:10:27.414 "r_mbytes_per_sec": 0, 00:10:27.414 "w_mbytes_per_sec": 0 00:10:27.414 }, 00:10:27.414 "claimed": false, 00:10:27.414 "zoned": false, 00:10:27.414 "supported_io_types": { 00:10:27.414 "read": true, 00:10:27.414 "write": true, 00:10:27.414 "unmap": true, 00:10:27.414 "flush": false, 00:10:27.414 "reset": true, 00:10:27.414 "nvme_admin": false, 00:10:27.414 "nvme_io": false, 00:10:27.414 "nvme_io_md": false, 00:10:27.414 "write_zeroes": true, 00:10:27.414 "zcopy": false, 00:10:27.414 "get_zone_info": false, 00:10:27.414 "zone_management": false, 00:10:27.414 "zone_append": false, 00:10:27.414 "compare": false, 00:10:27.414 "compare_and_write": false, 00:10:27.414 "abort": false, 00:10:27.414 "seek_hole": true, 00:10:27.414 "seek_data": true, 00:10:27.414 "copy": false, 00:10:27.414 "nvme_iov_md": false 00:10:27.414 }, 00:10:27.414 "driver_specific": { 00:10:27.414 "lvol": { 00:10:27.414 "lvol_store_uuid": "182d8286-9989-4c49-908a-006bb08b83b2", 00:10:27.414 "base_bdev": "aio_bdev", 00:10:27.414 "thin_provision": false, 00:10:27.414 "num_allocated_clusters": 38, 00:10:27.414 "snapshot": false, 00:10:27.414 "clone": false, 00:10:27.414 "esnap_clone": false 00:10:27.414 } 00:10:27.414 } 00:10:27.414 } 00:10:27.414 ] 00:10:27.414 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:27.414 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:27.414 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:28.062 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:28.062 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:28.062 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:28.320 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:28.320 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:28.888 [2024-10-28 15:06:15.538315] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:28.888 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:29.148 request: 00:10:29.148 { 00:10:29.148 "uuid": "182d8286-9989-4c49-908a-006bb08b83b2", 00:10:29.148 "method": "bdev_lvol_get_lvstores", 00:10:29.148 "req_id": 1 00:10:29.148 } 00:10:29.148 Got JSON-RPC error response 00:10:29.148 response: 00:10:29.148 { 00:10:29.148 "code": -19, 00:10:29.148 "message": "No such device" 00:10:29.148 } 00:10:29.148 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:29.148 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:29.148 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:29.148 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:29.148 15:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:29.718 aio_bdev 00:10:29.718 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 58b1b04f-3267-4c15-8f1a-f3d9b0bab356 00:10:29.718 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=58b1b04f-3267-4c15-8f1a-f3d9b0bab356 00:10:29.718 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.718 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:29.718 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.718 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.718 15:06:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:30.288 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58b1b04f-3267-4c15-8f1a-f3d9b0bab356 -t 2000 00:10:30.857 [ 00:10:30.857 { 00:10:30.857 "name": "58b1b04f-3267-4c15-8f1a-f3d9b0bab356", 00:10:30.857 "aliases": [ 00:10:30.857 "lvs/lvol" 00:10:30.857 ], 00:10:30.857 "product_name": "Logical Volume", 00:10:30.857 "block_size": 4096, 00:10:30.857 "num_blocks": 38912, 00:10:30.857 "uuid": "58b1b04f-3267-4c15-8f1a-f3d9b0bab356", 00:10:30.857 "assigned_rate_limits": { 00:10:30.857 "rw_ios_per_sec": 0, 00:10:30.857 "rw_mbytes_per_sec": 0, 00:10:30.857 "r_mbytes_per_sec": 0, 00:10:30.857 "w_mbytes_per_sec": 0 00:10:30.857 }, 00:10:30.857 "claimed": false, 00:10:30.857 "zoned": false, 00:10:30.857 "supported_io_types": { 00:10:30.857 "read": true, 00:10:30.857 "write": true, 00:10:30.857 "unmap": true, 00:10:30.857 "flush": false, 00:10:30.857 "reset": true, 00:10:30.857 "nvme_admin": false, 00:10:30.857 "nvme_io": false, 00:10:30.857 "nvme_io_md": false, 00:10:30.857 "write_zeroes": true, 00:10:30.857 "zcopy": false, 00:10:30.857 "get_zone_info": false, 00:10:30.857 "zone_management": false, 00:10:30.857 "zone_append": false, 00:10:30.857 "compare": false, 00:10:30.857 "compare_and_write": false, 00:10:30.857 "abort": false, 00:10:30.857 "seek_hole": true, 00:10:30.857 "seek_data": true, 00:10:30.857 "copy": false, 00:10:30.857 "nvme_iov_md": false 00:10:30.857 }, 00:10:30.857 "driver_specific": { 00:10:30.857 "lvol": { 00:10:30.857 "lvol_store_uuid": "182d8286-9989-4c49-908a-006bb08b83b2", 00:10:30.857 "base_bdev": "aio_bdev", 00:10:30.857 "thin_provision": false, 00:10:30.857 "num_allocated_clusters": 38, 00:10:30.857 "snapshot": false, 00:10:30.857 "clone": false, 00:10:30.857 "esnap_clone": false 00:10:30.857 } 00:10:30.857 } 00:10:30.857 } 00:10:30.857 ] 00:10:30.857 15:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:30.857 15:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:30.857 15:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:31.117 15:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:31.117 15:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:31.117 15:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:32.056 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:32.056 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58b1b04f-3267-4c15-8f1a-f3d9b0bab356 00:10:32.315 15:06:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 182d8286-9989-4c49-908a-006bb08b83b2 00:10:32.892 15:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:33.459 00:10:33.459 real 0m29.708s 00:10:33.459 user 1m11.866s 00:10:33.459 sys 0m6.187s 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:33.459 ************************************ 00:10:33.459 END TEST lvs_grow_dirty 00:10:33.459 ************************************ 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:33.459 nvmf_trace.0 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.459 rmmod nvme_tcp 00:10:33.459 rmmod nvme_fabrics 00:10:33.459 rmmod nvme_keyring 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:33.459 
15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3082717 ']' 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3082717 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3082717 ']' 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3082717 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:33.459 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3082717 00:10:33.718 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:33.718 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:33.718 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3082717' 00:10:33.718 killing process with pid 3082717 00:10:33.718 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3082717 00:10:33.718 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3082717 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.978 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.880 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:35.880 00:10:35.880 real 1m2.190s 00:10:35.880 user 1m48.514s 00:10:35.880 sys 0m12.141s 00:10:35.880 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.880 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:35.880 ************************************ 00:10:35.880 END TEST nvmf_lvs_grow 00:10:35.880 ************************************ 00:10:35.880 15:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:35.880 15:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.880 15:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.880 15:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.139 ************************************ 00:10:36.139 START TEST nvmf_bdev_io_wait 00:10:36.139 ************************************ 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:36.139 * Looking for test storage... 00:10:36.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lcov --version 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.139 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:36.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.139 --rc genhtml_branch_coverage=1 00:10:36.139 --rc genhtml_function_coverage=1 00:10:36.139 --rc genhtml_legend=1 00:10:36.139 --rc geninfo_all_blocks=1 00:10:36.139 --rc geninfo_unexecuted_blocks=1 00:10:36.140 00:10:36.140 ' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:36.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.140 --rc genhtml_branch_coverage=1 00:10:36.140 --rc genhtml_function_coverage=1 00:10:36.140 --rc genhtml_legend=1 00:10:36.140 --rc geninfo_all_blocks=1 00:10:36.140 --rc geninfo_unexecuted_blocks=1 00:10:36.140 00:10:36.140 ' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:36.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.140 --rc genhtml_branch_coverage=1 00:10:36.140 --rc genhtml_function_coverage=1 00:10:36.140 --rc genhtml_legend=1 00:10:36.140 --rc geninfo_all_blocks=1 00:10:36.140 --rc geninfo_unexecuted_blocks=1 00:10:36.140 00:10:36.140 ' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:36.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.140 --rc genhtml_branch_coverage=1 00:10:36.140 --rc genhtml_function_coverage=1 00:10:36.140 --rc genhtml_legend=1 00:10:36.140 --rc geninfo_all_blocks=1 00:10:36.140 --rc geninfo_unexecuted_blocks=1 00:10:36.140 00:10:36.140 ' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.140 15:06:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.140 15:06:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:39.436 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:39.436 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.436 15:06:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:39.436 Found net devices under 0000:84:00.0: cvl_0_0 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:39.436 Found net devices under 0000:84:00.1: cvl_0_1 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.436 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:10:39.437 00:10:39.437 --- 10.0.0.2 ping statistics --- 00:10:39.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.437 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:10:39.437 00:10:39.437 --- 10.0.0.1 ping statistics --- 00:10:39.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.437 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.437 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3085840 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3085840 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3085840 ']' 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.437 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.437 [2024-10-28 15:06:26.093910] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
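The xtrace above is the harness's nvmf_tcp_init step: the target-side port (cvl_0_0) is moved into its own network namespace while the initiator-side port (cvl_0_1) stays in the host namespace, so traffic between 10.0.0.1 and 10.0.0.2 really crosses the link between the two E810 ports. Below is a condensed, hand-written sketch of that plumbing; it is illustrative rather than the nvmf/common.sh code itself, and the interface names, addresses and port number are simply the ones this log reports.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                                 # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, host side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, namespace side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open TCP/4420; the SPDK_NVMF comment tag is what the teardown greps for later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # host -> namespace reachability
ip netns exec "$NS" ping -c 1 10.0.0.1                   # namespace -> host reachability
The nvmf_tgt application is then launched inside that namespace via ip netns exec, which is the NVMF_TARGET_NS_CMD prefix visible in the commands above.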
00:10:39.437 [2024-10-28 15:06:26.094015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.437 [2024-10-28 15:06:26.216377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.696 [2024-10-28 15:06:26.326630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.696 [2024-10-28 15:06:26.326756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.696 [2024-10-28 15:06:26.326794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.696 [2024-10-28 15:06:26.326825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.696 [2024-10-28 15:06:26.326851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.696 [2024-10-28 15:06:26.330216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.696 [2024-10-28 15:06:26.330316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.696 [2024-10-28 15:06:26.330411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.696 [2024-10-28 15:06:26.330414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.696 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:10:39.697 [2024-10-28 15:06:26.493102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 Malloc0 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 [2024-10-28 15:06:26.546796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3085987 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3085989 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.697 { 00:10:39.697 "params": { 
00:10:39.697 "name": "Nvme$subsystem", 00:10:39.697 "trtype": "$TEST_TRANSPORT", 00:10:39.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.697 "adrfam": "ipv4", 00:10:39.697 "trsvcid": "$NVMF_PORT", 00:10:39.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.697 "hdgst": ${hdgst:-false}, 00:10:39.697 "ddgst": ${ddgst:-false} 00:10:39.697 }, 00:10:39.697 "method": "bdev_nvme_attach_controller" 00:10:39.697 } 00:10:39.697 EOF 00:10:39.697 )") 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3085991 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.697 { 00:10:39.697 "params": { 00:10:39.697 "name": "Nvme$subsystem", 00:10:39.697 "trtype": "$TEST_TRANSPORT", 00:10:39.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.697 "adrfam": "ipv4", 00:10:39.697 "trsvcid": "$NVMF_PORT", 00:10:39.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.697 "hdgst": ${hdgst:-false}, 00:10:39.697 "ddgst": ${ddgst:-false} 00:10:39.697 }, 00:10:39.697 "method": "bdev_nvme_attach_controller" 00:10:39.697 } 00:10:39.697 EOF 00:10:39.697 )") 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3085994 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.697 { 00:10:39.697 "params": { 00:10:39.697 "name": "Nvme$subsystem", 00:10:39.697 "trtype": "$TEST_TRANSPORT", 00:10:39.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.697 "adrfam": "ipv4", 00:10:39.697 "trsvcid": "$NVMF_PORT", 00:10:39.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.697 "hdgst": ${hdgst:-false}, 
00:10:39.697 "ddgst": ${ddgst:-false} 00:10:39.697 }, 00:10:39.697 "method": "bdev_nvme_attach_controller" 00:10:39.697 } 00:10:39.697 EOF 00:10:39.697 )") 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.697 { 00:10:39.697 "params": { 00:10:39.697 "name": "Nvme$subsystem", 00:10:39.697 "trtype": "$TEST_TRANSPORT", 00:10:39.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.697 "adrfam": "ipv4", 00:10:39.697 "trsvcid": "$NVMF_PORT", 00:10:39.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.697 "hdgst": ${hdgst:-false}, 00:10:39.697 "ddgst": ${ddgst:-false} 00:10:39.697 }, 00:10:39.697 "method": "bdev_nvme_attach_controller" 00:10:39.697 } 00:10:39.697 EOF 00:10:39.697 )") 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3085987 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.697 "params": { 00:10:39.697 "name": "Nvme1", 00:10:39.697 "trtype": "tcp", 00:10:39.697 "traddr": "10.0.0.2", 00:10:39.697 "adrfam": "ipv4", 00:10:39.697 "trsvcid": "4420", 00:10:39.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.697 "hdgst": false, 00:10:39.697 "ddgst": false 00:10:39.697 }, 00:10:39.697 "method": "bdev_nvme_attach_controller" 00:10:39.697 }' 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:39.697 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.697 "params": { 00:10:39.697 "name": "Nvme1", 00:10:39.698 "trtype": "tcp", 00:10:39.698 "traddr": "10.0.0.2", 00:10:39.698 "adrfam": "ipv4", 00:10:39.698 "trsvcid": "4420", 00:10:39.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.698 "hdgst": false, 00:10:39.698 "ddgst": false 00:10:39.698 }, 00:10:39.698 "method": "bdev_nvme_attach_controller" 00:10:39.698 }' 00:10:39.698 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.698 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.698 "params": { 00:10:39.698 "name": "Nvme1", 00:10:39.698 "trtype": "tcp", 00:10:39.698 "traddr": "10.0.0.2", 00:10:39.698 "adrfam": "ipv4", 00:10:39.698 "trsvcid": "4420", 00:10:39.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.698 "hdgst": false, 00:10:39.698 "ddgst": false 00:10:39.698 }, 00:10:39.698 "method": "bdev_nvme_attach_controller" 00:10:39.698 }' 00:10:39.698 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:39.698 15:06:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.698 "params": { 00:10:39.698 "name": "Nvme1", 00:10:39.698 "trtype": "tcp", 00:10:39.698 "traddr": "10.0.0.2", 00:10:39.698 "adrfam": "ipv4", 00:10:39.698 "trsvcid": "4420", 00:10:39.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.698 "hdgst": false, 00:10:39.698 "ddgst": false 00:10:39.698 }, 00:10:39.698 "method": "bdev_nvme_attach_controller" 00:10:39.698 }' 00:10:39.955 [2024-10-28 15:06:26.598927] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:10:39.955 [2024-10-28 15:06:26.598928] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:10:39.956 [2024-10-28 15:06:26.599025] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-28 15:06:26.599024] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:39.956 --proc-type=auto ] 00:10:39.956 [2024-10-28 15:06:26.600661] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:10:39.956 [2024-10-28 15:06:26.600661] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
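The four bdevperf instances above never talk to the target's RPC socket; each one receives a complete bdev configuration through --json /dev/fd/63, i.e. a bash process substitution feeding it the JSON that gen_nvmf_target_json prints (the resolved '{ "params": ... }' blocks are shown just above). A plain-file equivalent for one instance might look like the sketch below; the outer "subsystems"/"bdev" wrapper is written from the general SPDK JSON-config layout rather than copied from this log, and the file name nvme1.json is only illustrative.
cat > nvme1.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
# the write-workload instance from the log: core mask 0x10, shm id 1, qd 128, 4 KiB I/O, 1 second
./build/examples/bdevperf -m 0x10 -i 1 --json nvme1.json -q 128 -o 4096 -w write -t 1 -s 256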
00:10:39.956 [2024-10-28 15:06:26.600767] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-28 15:06:26.600767] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:39.956 --proc-type=auto ] 00:10:39.956 [2024-10-28 15:06:26.789044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.213 [2024-10-28 15:06:26.846657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:40.213 [2024-10-28 15:06:26.906499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.213 [2024-10-28 15:06:26.964431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:40.213 [2024-10-28 15:06:26.982404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.213 [2024-10-28 15:06:27.033497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:40.213 [2024-10-28 15:06:27.059824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.471 [2024-10-28 15:06:27.113001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:40.471 Running I/O for 1 seconds... 00:10:40.471 Running I/O for 1 seconds... 00:10:40.471 Running I/O for 1 seconds... 00:10:40.730 Running I/O for 1 seconds... 00:10:41.663 6729.00 IOPS, 26.29 MiB/s 00:10:41.663 Latency(us) 00:10:41.663 [2024-10-28T14:06:28.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.663 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:41.663 Nvme1n1 : 1.02 6757.04 26.39 0.00 0.00 18733.50 5267.15 29127.11 00:10:41.663 [2024-10-28T14:06:28.530Z] =================================================================================================================== 00:10:41.663 [2024-10-28T14:06:28.530Z] Total : 6757.04 26.39 0.00 0.00 18733.50 5267.15 29127.11 00:10:41.663 8945.00 IOPS, 34.94 MiB/s 00:10:41.663 Latency(us) 00:10:41.663 [2024-10-28T14:06:28.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.663 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:41.663 Nvme1n1 : 1.01 8989.42 35.11 0.00 0.00 14167.88 8155.59 24369.68 00:10:41.663 [2024-10-28T14:06:28.530Z] =================================================================================================================== 00:10:41.663 [2024-10-28T14:06:28.530Z] Total : 8989.42 35.11 0.00 0.00 14167.88 8155.59 24369.68 00:10:41.663 6901.00 IOPS, 26.96 MiB/s 00:10:41.663 Latency(us) 00:10:41.663 [2024-10-28T14:06:28.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.663 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:41.663 Nvme1n1 : 1.01 7008.50 27.38 0.00 0.00 18214.83 3519.53 43884.85 00:10:41.663 [2024-10-28T14:06:28.530Z] =================================================================================================================== 00:10:41.663 [2024-10-28T14:06:28.530Z] Total : 7008.50 27.38 0.00 0.00 18214.83 3519.53 43884.85 00:10:41.663 198912.00 IOPS, 777.00 MiB/s [2024-10-28T14:06:28.530Z] 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3085989 00:10:41.663 
00:10:41.663 Latency(us) 00:10:41.663 [2024-10-28T14:06:28.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.663 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:41.663 Nvme1n1 : 1.00 198537.81 775.54 0.00 0.00 641.31 295.82 1881.13 00:10:41.663 [2024-10-28T14:06:28.530Z] =================================================================================================================== 00:10:41.663 [2024-10-28T14:06:28.530Z] Total : 198537.81 775.54 0.00 0.00 641.31 295.82 1881.13 00:10:41.663 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3085991 00:10:41.922 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3085994 00:10:41.922 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.922 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.922 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.923 rmmod nvme_tcp 00:10:41.923 rmmod nvme_fabrics 00:10:41.923 rmmod nvme_keyring 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3085840 ']' 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3085840 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3085840 ']' 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3085840 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085840 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 
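With the four jobs collected (the wait on each PID above), the harness unwinds the setup. The following is a hand-written summary of the teardown recorded here and in the lines just below (nvmftestfini and nvmf_tcp_fini); the ip netns delete line is an assumed equivalent of the remove_spdk_ns helper, whose body is not shown in this log.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"                                        # stop the nvmf_tgt started earlier
sync
modprobe -v -r nvme-tcp                                # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged 4420 ACCEPT rule
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk                        # assumed remove_spdk_ns equivalent; cvl_0_0 returns to the host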
00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085840' 00:10:41.923 killing process with pid 3085840 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3085840 00:10:41.923 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3085840 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.183 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.727 00:10:44.727 real 0m8.244s 00:10:44.727 user 0m16.537s 00:10:44.727 sys 0m4.175s 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:44.727 ************************************ 00:10:44.727 END TEST nvmf_bdev_io_wait 00:10:44.727 ************************************ 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.727 ************************************ 00:10:44.727 START TEST nvmf_queue_depth 00:10:44.727 ************************************ 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:44.727 * Looking for test storage... 
00:10:44.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lcov --version 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.727 --rc genhtml_branch_coverage=1 00:10:44.727 --rc genhtml_function_coverage=1 00:10:44.727 --rc genhtml_legend=1 00:10:44.727 --rc geninfo_all_blocks=1 00:10:44.727 --rc geninfo_unexecuted_blocks=1 00:10:44.727 00:10:44.727 ' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.727 --rc genhtml_branch_coverage=1 00:10:44.727 --rc genhtml_function_coverage=1 00:10:44.727 --rc genhtml_legend=1 00:10:44.727 --rc geninfo_all_blocks=1 00:10:44.727 --rc geninfo_unexecuted_blocks=1 00:10:44.727 00:10:44.727 ' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.727 --rc genhtml_branch_coverage=1 00:10:44.727 --rc genhtml_function_coverage=1 00:10:44.727 --rc genhtml_legend=1 00:10:44.727 --rc geninfo_all_blocks=1 00:10:44.727 --rc geninfo_unexecuted_blocks=1 00:10:44.727 00:10:44.727 ' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.727 --rc genhtml_branch_coverage=1 00:10:44.727 --rc genhtml_function_coverage=1 00:10:44.727 --rc genhtml_legend=1 00:10:44.727 --rc geninfo_all_blocks=1 00:10:44.727 --rc geninfo_unexecuted_blocks=1 00:10:44.727 00:10:44.727 ' 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.727 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.728 15:06:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:48.018 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:48.018 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:48.018 Found net devices under 0000:84:00.0: cvl_0_0 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:48.018 Found net devices under 0000:84:00.1: cvl_0_1 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.018 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:10:48.019 00:10:48.019 --- 10.0.0.2 ping statistics --- 00:10:48.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.019 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:10:48.019 00:10:48.019 --- 10.0.0.1 ping statistics --- 00:10:48.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.019 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3088321 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3088321 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3088321 ']' 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.019 [2024-10-28 15:06:34.496924] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
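Condensed, the nvmf_tcp_init sequence traced above builds a two-port loopback topology for the TCP tests: one port of the e810 NIC is moved into a private network namespace to act as the target, while the other stays in the default namespace as the initiator. A minimal sketch using the interface and address names from this run (root required; the trace also tags the iptables rule with an SPDK_NVMF comment so teardown can strip it later):
    ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back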
00:10:48.019 [2024-10-28 15:06:34.497040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.019 [2024-10-28 15:06:34.606677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.019 [2024-10-28 15:06:34.683589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.019 [2024-10-28 15:06:34.683672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.019 [2024-10-28 15:06:34.683717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.019 [2024-10-28 15:06:34.683743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.019 [2024-10-28 15:06:34.683765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.019 [2024-10-28 15:06:34.684672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.019 [2024-10-28 15:06:34.874871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.019 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.279 Malloc0 00:10:48.279 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.280 15:06:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 [2024-10-28 15:06:34.949145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3088386 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3088386 /var/tmp/bdevperf.sock 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3088386 ']' 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.280 15:06:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 [2024-10-28 15:06:35.060285] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
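The provisioning that queue_depth.sh drives above, written out as direct scripts/rpc.py calls instead of the rpc_cmd wrapper (workspace paths shortened): create the TCP transport, back it with a 64 MiB malloc bdev using 512-byte blocks, and expose that bdev through subsystem cnode1 on the target-namespace IP:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420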
00:10:48.280 [2024-10-28 15:06:35.060442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088386 ] 00:10:48.540 [2024-10-28 15:06:35.200830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.540 [2024-10-28 15:06:35.308820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.801 15:06:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.801 15:06:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:48.801 15:06:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:48.801 15:06:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.801 15:06:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 NVMe0n1 00:10:49.060 15:06:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.060 15:06:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:49.321 Running I/O for 10 seconds... 00:10:51.205 3268.00 IOPS, 12.77 MiB/s [2024-10-28T14:06:39.011Z] 3584.00 IOPS, 14.00 MiB/s [2024-10-28T14:06:40.394Z] 4096.00 IOPS, 16.00 MiB/s [2024-10-28T14:06:41.335Z] 4352.00 IOPS, 17.00 MiB/s [2024-10-28T14:06:42.275Z] 4301.40 IOPS, 16.80 MiB/s [2024-10-28T14:06:43.214Z] 4366.00 IOPS, 17.05 MiB/s [2024-10-28T14:06:44.153Z] 4498.29 IOPS, 17.57 MiB/s [2024-10-28T14:06:45.092Z] 4558.62 IOPS, 17.81 MiB/s [2024-10-28T14:06:46.473Z] 4504.33 IOPS, 17.60 MiB/s [2024-10-28T14:06:46.473Z] 4508.40 IOPS, 17.61 MiB/s 00:10:59.606 Latency(us) 00:10:59.606 [2024-10-28T14:06:46.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.606 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:59.606 Verification LBA range: start 0x0 length 0x4000 00:10:59.606 NVMe0n1 : 10.14 4545.58 17.76 0.00 0.00 223399.03 27379.48 160004.93 00:10:59.606 [2024-10-28T14:06:46.473Z] =================================================================================================================== 00:10:59.606 [2024-10-28T14:06:46.473Z] Total : 4545.58 17.76 0.00 0.00 223399.03 27379.48 160004.93 00:10:59.606 { 00:10:59.606 "results": [ 00:10:59.606 { 00:10:59.606 "job": "NVMe0n1", 00:10:59.606 "core_mask": "0x1", 00:10:59.606 "workload": "verify", 00:10:59.606 "status": "finished", 00:10:59.606 "verify_range": { 00:10:59.606 "start": 0, 00:10:59.606 "length": 16384 00:10:59.606 }, 00:10:59.606 "queue_depth": 1024, 00:10:59.607 "io_size": 4096, 00:10:59.607 "runtime": 10.13974, 00:10:59.607 "iops": 4545.580064183106, 00:10:59.607 "mibps": 17.756172125715256, 00:10:59.607 "io_failed": 0, 00:10:59.607 "io_timeout": 0, 00:10:59.607 "avg_latency_us": 223399.02508329332, 00:10:59.607 "min_latency_us": 27379.484444444446, 00:10:59.607 "max_latency_us": 160004.93037037036 00:10:59.607 } 00:10:59.607 ], 00:10:59.607 "core_count": 1 00:10:59.607 } 00:10:59.607 15:06:46 
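A quick consistency check on the bdevperf summary above: the reported MiB/s and the total number of verify I/Os follow directly from the JSON fields (4 KiB I/O size, ~10.14 s runtime), for example with bc:
    echo '4545.58 * 4096 / 1048576' | bc -l    # ~17.76 MiB/s, matching the mibps field
    echo '4545.58 * 10.13974' | bc -l          # ~46090 I/Os completed at queue depth 1024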
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3088386 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3088386 ']' 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3088386 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3088386 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3088386' 00:10:59.607 killing process with pid 3088386 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3088386 00:10:59.607 Received shutdown signal, test time was about 10.000000 seconds 00:10:59.607 00:10:59.607 Latency(us) 00:10:59.607 [2024-10-28T14:06:46.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.607 [2024-10-28T14:06:46.474Z] =================================================================================================================== 00:10:59.607 [2024-10-28T14:06:46.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:59.607 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3088386 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.866 rmmod nvme_tcp 00:10:59.866 rmmod nvme_fabrics 00:10:59.866 rmmod nvme_keyring 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3088321 ']' 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3088321 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3088321 ']' 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 3088321 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3088321 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3088321' 00:10:59.866 killing process with pid 3088321 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3088321 00:10:59.866 15:06:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3088321 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.435 15:06:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.344 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.344 00:11:02.344 real 0m18.064s 00:11:02.344 user 0m24.473s 00:11:02.344 sys 0m4.365s 00:11:02.344 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.344 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.344 ************************************ 00:11:02.344 END TEST nvmf_queue_depth 00:11:02.344 ************************************ 00:11:02.345 15:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:02.345 15:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.345 15:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.345 15:06:49 nvmf_tcp.nvmf_target_core -- 
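What the nvmf_queue_depth teardown above (nvmftestfini) boils down to: unload the host-side NVMe/TCP modules, strip only the iptables rules tagged SPDK_NVMF, and flush the test addresses. The _remove_spdk_ns body is not expanded in this log, so the netns delete line below is an assumption:
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules added by ipts
    ip netns delete cvl_0_0_ns_spdk                        # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1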
common/autotest_common.sh@10 -- # set +x 00:11:02.345 ************************************ 00:11:02.345 START TEST nvmf_target_multipath 00:11:02.345 ************************************ 00:11:02.345 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:02.605 * Looking for test storage... 00:11:02.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.605 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lcov --version 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:02.606 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:02.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.875 --rc genhtml_branch_coverage=1 00:11:02.875 --rc genhtml_function_coverage=1 00:11:02.875 --rc genhtml_legend=1 00:11:02.875 --rc geninfo_all_blocks=1 00:11:02.875 --rc geninfo_unexecuted_blocks=1 00:11:02.875 00:11:02.875 ' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:02.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.875 --rc genhtml_branch_coverage=1 00:11:02.875 --rc genhtml_function_coverage=1 00:11:02.875 --rc genhtml_legend=1 00:11:02.875 --rc geninfo_all_blocks=1 00:11:02.875 --rc geninfo_unexecuted_blocks=1 00:11:02.875 00:11:02.875 ' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:02.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.875 --rc genhtml_branch_coverage=1 00:11:02.875 --rc genhtml_function_coverage=1 00:11:02.875 --rc genhtml_legend=1 00:11:02.875 --rc geninfo_all_blocks=1 00:11:02.875 --rc geninfo_unexecuted_blocks=1 00:11:02.875 00:11:02.875 ' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:02.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.875 --rc genhtml_branch_coverage=1 00:11:02.875 --rc genhtml_function_coverage=1 00:11:02.875 --rc genhtml_legend=1 00:11:02.875 --rc geninfo_all_blocks=1 00:11:02.875 --rc geninfo_unexecuted_blocks=1 00:11:02.875 00:11:02.875 ' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.875 15:06:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:05.486 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:05.486 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:05.486 Found net devices under 0000:84:00.0: cvl_0_0 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.486 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.487 15:06:52 
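The device-discovery loop traced here reduces to: bucket PCI functions by vendor:device ID, keep the e810 bucket (IDs 0x1592/0x159b) because SPDK_TEST_NVMF_NICS=e810, then collect the net interface names behind each function. A sketch using the same pci_bus_cache map common.sh builds earlier in the run (the trace additionally skips interfaces that are not up):
    e810=()
    e810+=(${pci_bus_cache["0x8086:0x1592"]})
    e810+=(${pci_bus_cache["0x8086:0x159b"]})
    pci_devs=("${e810[@]}")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")    # e.g. cvl_0_0, cvl_0_1
    done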
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:05.487 Found net devices under 0000:84:00.1: cvl_0_1 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:11:05.487 00:11:05.487 --- 10.0.0.2 ping statistics --- 00:11:05.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.487 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:11:05.487 00:11:05.487 --- 10.0.0.1 ping statistics --- 00:11:05.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.487 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:05.487 only one NIC for nvmf test 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.487 rmmod nvme_tcp 00:11:05.487 rmmod nvme_fabrics 00:11:05.487 rmmod nvme_keyring 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.487 15:06:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
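The multipath test ends almost immediately above: the empty-string test at multipath.sh@45 is presumably guarding the second target IP, which nvmf_tcp_init left blank (NVMF_SECOND_TARGET_IP=), so the test prints 'only one NIC for nvmf test' and is skipped rather than failed. Roughly:
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # assumed to be the variable checked at line 45
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi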
nvmf/common.sh@129 -- # return 0 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.033 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.034 00:11:08.034 real 0m5.230s 00:11:08.034 user 0m1.103s 00:11:08.034 sys 0m2.151s 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:08.034 ************************************ 00:11:08.034 END TEST nvmf_target_multipath 00:11:08.034 ************************************ 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.034 ************************************ 00:11:08.034 START TEST nvmf_zcopy 00:11:08.034 ************************************ 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:08.034 * Looking for test storage... 
00:11:08.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lcov --version 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:08.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.034 --rc genhtml_branch_coverage=1 00:11:08.034 --rc genhtml_function_coverage=1 00:11:08.034 --rc genhtml_legend=1 00:11:08.034 --rc geninfo_all_blocks=1 00:11:08.034 --rc geninfo_unexecuted_blocks=1 00:11:08.034 00:11:08.034 ' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:08.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.034 --rc genhtml_branch_coverage=1 00:11:08.034 --rc genhtml_function_coverage=1 00:11:08.034 --rc genhtml_legend=1 00:11:08.034 --rc geninfo_all_blocks=1 00:11:08.034 --rc geninfo_unexecuted_blocks=1 00:11:08.034 00:11:08.034 ' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:08.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.034 --rc genhtml_branch_coverage=1 00:11:08.034 --rc genhtml_function_coverage=1 00:11:08.034 --rc genhtml_legend=1 00:11:08.034 --rc geninfo_all_blocks=1 00:11:08.034 --rc geninfo_unexecuted_blocks=1 00:11:08.034 00:11:08.034 ' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:08.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.034 --rc genhtml_branch_coverage=1 00:11:08.034 --rc genhtml_function_coverage=1 00:11:08.034 --rc genhtml_legend=1 00:11:08.034 --rc geninfo_all_blocks=1 00:11:08.034 --rc geninfo_unexecuted_blocks=1 00:11:08.034 00:11:08.034 ' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.034 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.035 15:06:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.329 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:11.330 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:11.330 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:11.330 Found net devices under 0000:84:00.0: cvl_0_0 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:11.330 Found net devices under 0000:84:00.1: cvl_0_1 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:11.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:11:11.330 00:11:11.330 --- 10.0.0.2 ping statistics --- 00:11:11.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.330 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:11:11.330 00:11:11.330 --- 10.0.0.1 ping statistics --- 00:11:11.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.330 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3093767 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3093767 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3093767 ']' 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.330 15:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.330 [2024-10-28 15:06:57.867077] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
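Condensed from the nvmf_tcp_init trace above: one e810 port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A sketch using this run's names, without the helper functions' error handling:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port now lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator interface, tagged for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                      # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> initiator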
00:11:11.330 [2024-10-28 15:06:57.867182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.330 [2024-10-28 15:06:57.995370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.330 [2024-10-28 15:06:58.107134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.330 [2024-10-28 15:06:58.107259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.330 [2024-10-28 15:06:58.107316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.330 [2024-10-28 15:06:58.107364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.330 [2024-10-28 15:06:58.107409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.330 [2024-10-28 15:06:58.108775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.591 [2024-10-28 15:06:58.401019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.591 [2024-10-28 15:06:58.426882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.591 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.851 malloc0 00:11:11.851 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.851 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:11.851 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.851 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.851 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:11.852 { 00:11:11.852 "params": { 00:11:11.852 "name": "Nvme$subsystem", 00:11:11.852 "trtype": "$TEST_TRANSPORT", 00:11:11.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.852 "adrfam": "ipv4", 00:11:11.852 "trsvcid": "$NVMF_PORT", 00:11:11.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.852 "hdgst": ${hdgst:-false}, 00:11:11.852 "ddgst": ${ddgst:-false} 00:11:11.852 }, 00:11:11.852 "method": "bdev_nvme_attach_controller" 00:11:11.852 } 00:11:11.852 EOF 00:11:11.852 )") 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
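The target bring-up traced above is the usual SPDK RPC sequence, with the zero-copy option enabled on the TCP transport. A sketch of the equivalent plain commands; the scripts/rpc.py invocation and relative paths are assumptions, since the trace drives everything through the rpc_cmd and waitforlisten helpers:

    # start the target inside the namespace (trace: nvmfpid=3093767, then waitforlisten)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # zero-copy enabled TCP transport
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MiB malloc bdev, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # initiator side, from the root namespace: 10 s verify workload, qd 128, 8 KiB I/O
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192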
00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:11.852 15:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:11.852 "params": { 00:11:11.852 "name": "Nvme1", 00:11:11.852 "trtype": "tcp", 00:11:11.852 "traddr": "10.0.0.2", 00:11:11.852 "adrfam": "ipv4", 00:11:11.852 "trsvcid": "4420", 00:11:11.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.852 "hdgst": false, 00:11:11.852 "ddgst": false 00:11:11.852 }, 00:11:11.852 "method": "bdev_nvme_attach_controller" 00:11:11.852 }' 00:11:11.852 [2024-10-28 15:06:58.543615] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:11:11.852 [2024-10-28 15:06:58.543728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093913 ] 00:11:11.852 [2024-10-28 15:06:58.675441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.112 [2024-10-28 15:06:58.799720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.371 Running I/O for 10 seconds... 00:11:14.700 2426.00 IOPS, 18.95 MiB/s [2024-10-28T14:07:02.137Z] 2458.00 IOPS, 19.20 MiB/s [2024-10-28T14:07:03.519Z] 2489.00 IOPS, 19.45 MiB/s [2024-10-28T14:07:04.459Z] 2472.75 IOPS, 19.32 MiB/s [2024-10-28T14:07:05.401Z] 2473.40 IOPS, 19.32 MiB/s [2024-10-28T14:07:06.343Z] 2471.17 IOPS, 19.31 MiB/s [2024-10-28T14:07:07.283Z] 2468.14 IOPS, 19.28 MiB/s [2024-10-28T14:07:08.223Z] 2462.25 IOPS, 19.24 MiB/s [2024-10-28T14:07:09.170Z] 2473.89 IOPS, 19.33 MiB/s [2024-10-28T14:07:09.430Z] 2492.80 IOPS, 19.48 MiB/s 00:11:22.563 Latency(us) 00:11:22.563 [2024-10-28T14:07:09.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.563 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:22.563 Verification LBA range: start 0x0 length 0x1000 00:11:22.563 Nvme1n1 : 10.04 2495.15 19.49 0.00 0.00 51106.58 7475.96 65633.09 00:11:22.563 [2024-10-28T14:07:09.430Z] =================================================================================================================== 00:11:22.563 [2024-10-28T14:07:09.430Z] Total : 2495.15 19.49 0.00 0.00 51106.58 7475.96 65633.09 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3095196 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:22.823 { 00:11:22.823 "params": { 00:11:22.823 "name": 
"Nvme$subsystem", 00:11:22.823 "trtype": "$TEST_TRANSPORT", 00:11:22.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.823 "adrfam": "ipv4", 00:11:22.823 "trsvcid": "$NVMF_PORT", 00:11:22.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.823 "hdgst": ${hdgst:-false}, 00:11:22.823 "ddgst": ${ddgst:-false} 00:11:22.823 }, 00:11:22.823 "method": "bdev_nvme_attach_controller" 00:11:22.823 } 00:11:22.823 EOF 00:11:22.823 )") 00:11:22.823 [2024-10-28 15:07:09.520043] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:22.823 [2024-10-28 15:07:09.520155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:22.823 15:07:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:22.823 "params": { 00:11:22.823 "name": "Nvme1", 00:11:22.823 "trtype": "tcp", 00:11:22.823 "traddr": "10.0.0.2", 00:11:22.823 "adrfam": "ipv4", 00:11:22.823 "trsvcid": "4420", 00:11:22.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.823 "hdgst": false, 00:11:22.823 "ddgst": false 00:11:22.823 }, 00:11:22.823 "method": "bdev_nvme_attach_controller" 00:11:22.823 }' 00:11:22.823 [2024-10-28 15:07:09.531982] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.532021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.544006] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.544070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.556120] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.556183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.568159] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.568222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.580193] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.580255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.592225] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.592287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.596226] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:11:22.823 [2024-10-28 15:07:09.596403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095196 ] 00:11:22.823 [2024-10-28 15:07:09.604270] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.604342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.616306] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.616372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.628337] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.628400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.640375] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.640436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.652410] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.652471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.664447] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.664507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.676484] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.676544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.823 [2024-10-28 15:07:09.688523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.823 [2024-10-28 15:07:09.688584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.700561] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.700623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.712603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.712696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.724478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.083 [2024-10-28 15:07:09.724679] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.724728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.736719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.736756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.748751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.748815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:23.083 [2024-10-28 15:07:09.760734] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.760763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.772773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.772801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.784776] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.784805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.796786] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.796814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.808815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.808842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.820849] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.820877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.830924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.083 [2024-10-28 15:07:09.832884] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.832912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.844913] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.844941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.857023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.857099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.869083] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.869160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.881164] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.881239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.893200] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.893275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.905239] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.905316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.917269] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.917348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 
15:07:09.929291] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.929357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.083 [2024-10-28 15:07:09.941318] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.083 [2024-10-28 15:07:09.941379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:09.953391] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:09.953466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:09.965432] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:09.965510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:09.977454] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:09.977526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:09.989462] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:09.989524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.001510] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.001574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.009443] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.009481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.021585] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.021673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.029490] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.029520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.041523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.041553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.049548] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.049578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.057565] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.057594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.065587] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.065614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.343 [2024-10-28 15:07:10.073612] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.343 [2024-10-28 15:07:10.073639] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:23.343 [2024-10-28 15:07:10.085762] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:23.343 [2024-10-28 15:07:10.085799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair -- subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- keeps repeating with fresh timestamps from 15:07:10.097 through 15:07:15.457, i.e. across the whole window shown below; only the distinct messages (per-second throughput samples, latency summary, shutdown) are retained ...]
00:11:23.343 Running I/O for 5 seconds...
00:11:24.383 5438.00 IOPS, 42.48 MiB/s [2024-10-28T14:07:11.250Z]
00:11:25.426 5260.50 IOPS, 41.10 MiB/s [2024-10-28T14:07:12.293Z]
00:11:26.469 5218.33 IOPS, 40.77 MiB/s [2024-10-28T14:07:13.336Z]
00:11:27.511 5215.50 IOPS, 40.75 MiB/s [2024-10-28T14:07:14.378Z]
00:11:28.293 5138.00 IOPS, 40.14 MiB/s [2024-10-28T14:07:15.160Z]
00:11:28.555 Latency(us)
00:11:28.555 [2024-10-28T14:07:15.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:28.555 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:28.555 Nvme1n1 : 5.01 5153.63 40.26 0.00 0.00 24802.83 4344.79 41748.86
00:11:28.555 [2024-10-28T14:07:15.422Z] ===================================================================================================================
00:11:28.555 [2024-10-28T14:07:15.422Z] Total : 5153.63 40.26 0.00 0.00 24802.83 4344.79 41748.86
00:11:28.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3095196) - No such process
00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
target/zcopy.sh@49 -- # wait 3095196 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.816 delay0 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.816 15:07:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:29.078 [2024-10-28 15:07:15.711872] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:35.659 Initializing NVMe Controllers 00:11:35.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:35.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:35.659 Initialization complete. Launching workers. 
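As a quick sanity check on the 5-second randrw summary above: at queue depth 128 and an average completion latency of 24802.83 us, Little's law gives roughly 128 / 0.0248 s, about 5160 IOPS, in line with the reported 5153.63 IOPS; and 5153.63 I/Os of 8192 bytes per second is about 40.3 MiB/s, matching the 40.26 MiB/s column.

The slow-namespace/abort phase traced in zcopy.sh lines 52-56 above reduces to the stand-alone sketch below. It is illustrative only: it assumes an SPDK nvmf target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc0 bdev, that scripts/rpc.py talks to the default RPC socket (rpc_cmd in the test harness is a thin wrapper around it), and the SPDK= path is simply the workspace used in this job.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # illustrative workspace path from this job
  # Drop the current namespace so it can be replaced by a deliberately slow one.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev: 1,000,000 us average and p99 latency for both reads and writes.
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Expose the delay bdev as NSID 1 of the same subsystem.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Hit it with the abort example app (1 core, 5 s, queue depth 64, 50/50 randrw) so that
  # I/Os stuck behind the ~1 s delay are still outstanding when aborts are submitted.
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The per-namespace and per-controller counters printed next (I/O completed vs. failed, aborts submitted vs. failed to submit, success vs. unsuccessful) are informational here; as the later END TEST line shows, the run passes even though many aborts are reported unsuccessful.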
00:11:35.659 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:11:35.659 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:11:35.659 success 189, unsuccessful 167, failed 0 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.659 rmmod nvme_tcp 00:11:35.659 rmmod nvme_fabrics 00:11:35.659 rmmod nvme_keyring 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3093767 ']' 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3093767 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3093767 ']' 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3093767 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3093767 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3093767' 00:11:35.659 killing process with pid 3093767 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3093767 00:11:35.659 15:07:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3093767 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.659 15:07:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.659 15:07:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.569 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.569 00:11:37.569 real 0m29.924s 00:11:37.569 user 0m42.463s 00:11:37.569 sys 0m9.700s 00:11:37.569 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.569 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.569 ************************************ 00:11:37.569 END TEST nvmf_zcopy 00:11:37.569 ************************************ 00:11:37.569 15:07:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:37.569 15:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.569 15:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.569 15:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:37.828 ************************************ 00:11:37.828 START TEST nvmf_nmic 00:11:37.828 ************************************ 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:37.828 * Looking for test storage... 
00:11:37.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lcov --version 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.828 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.829 --rc genhtml_branch_coverage=1 00:11:37.829 --rc genhtml_function_coverage=1 00:11:37.829 --rc genhtml_legend=1 00:11:37.829 --rc geninfo_all_blocks=1 00:11:37.829 --rc geninfo_unexecuted_blocks=1 00:11:37.829 00:11:37.829 ' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.829 --rc genhtml_branch_coverage=1 00:11:37.829 --rc genhtml_function_coverage=1 00:11:37.829 --rc genhtml_legend=1 00:11:37.829 --rc geninfo_all_blocks=1 00:11:37.829 --rc geninfo_unexecuted_blocks=1 00:11:37.829 00:11:37.829 ' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.829 --rc genhtml_branch_coverage=1 00:11:37.829 --rc genhtml_function_coverage=1 00:11:37.829 --rc genhtml_legend=1 00:11:37.829 --rc geninfo_all_blocks=1 00:11:37.829 --rc geninfo_unexecuted_blocks=1 00:11:37.829 00:11:37.829 ' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:37.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.829 --rc genhtml_branch_coverage=1 00:11:37.829 --rc genhtml_function_coverage=1 00:11:37.829 --rc genhtml_legend=1 00:11:37.829 --rc geninfo_all_blocks=1 00:11:37.829 --rc geninfo_unexecuted_blocks=1 00:11:37.829 00:11:37.829 ' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:37.829 
15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.829 15:07:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:41.124 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:41.124 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.124 15:07:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:41.124 Found net devices under 0000:84:00.0: cvl_0_0 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:41.124 Found net devices under 0000:84:00.1: cvl_0_1 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.124 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:11:41.125 00:11:41.125 --- 10.0.0.2 ping statistics --- 00:11:41.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.125 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:11:41.125 00:11:41.125 --- 10.0.0.1 ping statistics --- 00:11:41.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.125 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3098642 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3098642 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3098642 ']' 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.125 15:07:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.125 [2024-10-28 15:07:27.938705] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
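The nvmftestinit sequence that just completed is mostly network plumbing: one port of the e810 pair (cvl_0_0) is moved into a private network namespace for the target, the peer port (cvl_0_1) stays in the host namespace as the initiator side, an iptables rule opens TCP/4420, and a ping in each direction confirms the path before nvmf_tgt is launched inside the namespace. A condensed sketch of what the trace executed, with the interface names and the 10.0.0.0/24 addresses taken from this run (the harness wrappers add comments and bookkeeping around these commands):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP into the initiator port
  ping -c 1 10.0.0.2                                                   # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> host
  # start the target inside the namespace, as the log shows below
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF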
00:11:41.125 [2024-10-28 15:07:27.938808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.409 [2024-10-28 15:07:28.081628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.409 [2024-10-28 15:07:28.198166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.409 [2024-10-28 15:07:28.198289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.409 [2024-10-28 15:07:28.198326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.409 [2024-10-28 15:07:28.198357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.409 [2024-10-28 15:07:28.198384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.409 [2024-10-28 15:07:28.201861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.409 [2024-10-28 15:07:28.201960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.409 [2024-10-28 15:07:28.202061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.409 [2024-10-28 15:07:28.202066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 [2024-10-28 15:07:29.079635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 Malloc0 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 [2024-10-28 15:07:29.145291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:42.418 test case1: single bdev can't be used in multiple subsystems 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:42.418 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.419 [2024-10-28 15:07:29.169124] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:42.419 [2024-10-28 15:07:29.169154] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:42.419 [2024-10-28 15:07:29.169170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.419 request: 00:11:42.419 { 00:11:42.419 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:42.419 "namespace": { 00:11:42.419 "bdev_name": "Malloc0", 00:11:42.419 "no_auto_visible": false 
00:11:42.419 }, 00:11:42.419 "method": "nvmf_subsystem_add_ns", 00:11:42.419 "req_id": 1 00:11:42.419 } 00:11:42.419 Got JSON-RPC error response 00:11:42.419 response: 00:11:42.419 { 00:11:42.419 "code": -32602, 00:11:42.419 "message": "Invalid parameters" 00:11:42.419 } 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:42.419 Adding namespace failed - expected result. 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:42.419 test case2: host connect to nvmf target in multiple paths 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.419 [2024-10-28 15:07:29.177246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.419 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.348 15:07:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:43.912 15:07:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.912 15:07:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:43.913 15:07:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.913 15:07:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:43.913 15:07:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:45.809 15:07:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:45.809 15:07:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:45.809 15:07:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.809 15:07:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:45.809 15:07:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.809 15:07:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:45.809 15:07:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:45.809 [global] 00:11:45.809 thread=1 00:11:45.809 invalidate=1 00:11:45.809 rw=write 00:11:45.809 time_based=1 00:11:45.809 runtime=1 00:11:45.809 ioengine=libaio 00:11:45.809 direct=1 00:11:45.809 bs=4096 00:11:45.809 iodepth=1 00:11:45.809 norandommap=0 00:11:45.809 numjobs=1 00:11:45.809 00:11:45.809 verify_dump=1 00:11:45.809 verify_backlog=512 00:11:45.809 verify_state_save=0 00:11:45.809 do_verify=1 00:11:45.809 verify=crc32c-intel 00:11:45.809 [job0] 00:11:45.809 filename=/dev/nvme0n1 00:11:45.809 Could not set queue depth (nvme0n1) 00:11:46.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.067 fio-3.35 00:11:46.067 Starting 1 thread 00:11:47.440 00:11:47.440 job0: (groupid=0, jobs=1): err= 0: pid=3099409: Mon Oct 28 15:07:33 2024 00:11:47.440 read: IOPS=2351, BW=9407KiB/s (9632kB/s)(9416KiB/1001msec) 00:11:47.440 slat (nsec): min=5498, max=45695, avg=9588.26, stdev=4883.01 00:11:47.440 clat (usec): min=174, max=614, avg=240.98, stdev=51.56 00:11:47.440 lat (usec): min=180, max=624, avg=250.57, stdev=52.74 00:11:47.440 clat percentiles (usec): 00:11:47.440 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 208], 00:11:47.440 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:11:47.440 | 70.00th=[ 247], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 314], 00:11:47.440 | 99.00th=[ 490], 99.50th=[ 523], 99.90th=[ 562], 99.95th=[ 562], 00:11:47.440 | 99.99th=[ 619] 00:11:47.440 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:47.440 slat (nsec): min=7286, max=38414, avg=9073.74, stdev=3353.37 00:11:47.440 clat (usec): min=120, max=402, avg=146.17, stdev=15.41 00:11:47.440 lat (usec): min=128, max=436, avg=155.24, stdev=16.11 00:11:47.440 clat percentiles (usec): 00:11:47.440 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 135], 00:11:47.440 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:11:47.440 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 174], 00:11:47.440 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 243], 99.95th=[ 269], 00:11:47.440 | 99.99th=[ 404] 00:11:47.440 bw ( KiB/s): min=11928, max=11928, per=100.00%, avg=11928.00, stdev= 0.00, samples=1 00:11:47.440 iops : min= 2982, max= 2982, avg=2982.00, stdev= 0.00, samples=1 00:11:47.440 lat (usec) : 250=86.51%, 500=13.13%, 750=0.37% 00:11:47.440 cpu : usr=2.20%, sys=4.80%, ctx=4916, majf=0, minf=1 00:11:47.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.440 issued rwts: total=2354,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.440 00:11:47.440 Run status group 0 (all jobs): 00:11:47.440 READ: bw=9407KiB/s (9632kB/s), 9407KiB/s-9407KiB/s (9632kB/s-9632kB/s), io=9416KiB (9642kB), run=1001-1001msec 00:11:47.440 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:47.440 00:11:47.440 Disk stats (read/write): 00:11:47.440 nvme0n1: ios=2081/2407, merge=0/0, ticks=1465/342, in_queue=1807, util=99.00% 00:11:47.440 15:07:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:11:47.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:47.440 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.440 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.440 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.440 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.441 rmmod nvme_tcp 00:11:47.441 rmmod nvme_fabrics 00:11:47.441 rmmod nvme_keyring 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3098642 ']' 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3098642 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3098642 ']' 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3098642 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3098642 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3098642' 00:11:47.441 killing process with pid 3098642 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3098642 00:11:47.441 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 
3098642 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.700 15:07:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.244 00:11:50.244 real 0m12.154s 00:11:50.244 user 0m26.404s 00:11:50.244 sys 0m3.674s 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.244 ************************************ 00:11:50.244 END TEST nvmf_nmic 00:11:50.244 ************************************ 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:50.244 ************************************ 00:11:50.244 START TEST nvmf_fio_target 00:11:50.244 ************************************ 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:50.244 * Looking for test storage... 
00:11:50.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lcov --version 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.244 --rc genhtml_branch_coverage=1 00:11:50.244 --rc genhtml_function_coverage=1 00:11:50.244 --rc genhtml_legend=1 00:11:50.244 --rc geninfo_all_blocks=1 00:11:50.244 --rc geninfo_unexecuted_blocks=1 00:11:50.244 00:11:50.244 ' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.244 --rc genhtml_branch_coverage=1 00:11:50.244 --rc genhtml_function_coverage=1 00:11:50.244 --rc genhtml_legend=1 00:11:50.244 --rc geninfo_all_blocks=1 00:11:50.244 --rc geninfo_unexecuted_blocks=1 00:11:50.244 00:11:50.244 ' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.244 --rc genhtml_branch_coverage=1 00:11:50.244 --rc genhtml_function_coverage=1 00:11:50.244 --rc genhtml_legend=1 00:11:50.244 --rc geninfo_all_blocks=1 00:11:50.244 --rc geninfo_unexecuted_blocks=1 00:11:50.244 00:11:50.244 ' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.244 --rc genhtml_branch_coverage=1 00:11:50.244 --rc genhtml_function_coverage=1 00:11:50.244 --rc genhtml_legend=1 00:11:50.244 --rc geninfo_all_blocks=1 00:11:50.244 --rc geninfo_unexecuted_blocks=1 00:11:50.244 00:11:50.244 ' 00:11:50.244 15:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.244 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.245 15:07:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.245 15:07:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.534 15:07:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.534 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:53.535 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:53.535 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.535 15:07:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:53.535 Found net devices under 0000:84:00.0: cvl_0_0 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:53.535 Found net devices under 0000:84:00.1: cvl_0_1 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.535 15:07:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.535 15:07:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:11:53.535 00:11:53.535 --- 10.0.0.2 ping statistics --- 00:11:53.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.535 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:53.535 00:11:53.535 --- 10.0.0.1 ping statistics --- 00:11:53.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.535 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3101638 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3101638 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3101638 ']' 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.535 15:07:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.535 [2024-10-28 15:07:40.135734] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:11:53.535 [2024-10-28 15:07:40.135900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.535 [2024-10-28 15:07:40.319995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.794 [2024-10-28 15:07:40.442251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.794 [2024-10-28 15:07:40.442356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.794 [2024-10-28 15:07:40.442393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.794 [2024-10-28 15:07:40.442424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.794 [2024-10-28 15:07:40.442465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.794 [2024-10-28 15:07:40.446169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.794 [2024-10-28 15:07:40.446279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.794 [2024-10-28 15:07:40.446376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.794 [2024-10-28 15:07:40.446379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.727 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.727 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:54.727 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.727 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.727 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.727 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.727 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:55.292 [2024-10-28 15:07:41.852075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.292 15:07:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.550 15:07:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:55.550 15:07:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.482 15:07:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:56.482 15:07:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.047 15:07:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:57.047 15:07:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.612 15:07:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:57.612 15:07:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:57.869 15:07:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.801 15:07:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:58.801 15:07:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.366 15:07:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:59.366 15:07:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.932 15:07:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:59.932 15:07:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:00.190 15:07:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.123 15:07:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:01.123 15:07:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.380 15:07:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:01.380 15:07:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.638 15:07:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.895 [2024-10-28 15:07:48.741976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.154 15:07:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:02.411 15:07:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:02.668 15:07:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.234 15:07:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:03.234 15:07:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.234 15:07:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.234 15:07:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:03.234 15:07:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:03.234 15:07:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.760 15:07:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.760 15:07:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.760 15:07:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.760 15:07:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:05.760 15:07:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.760 15:07:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:05.760 15:07:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:05.760 [global] 00:12:05.760 thread=1 00:12:05.760 invalidate=1 00:12:05.760 rw=write 00:12:05.760 time_based=1 00:12:05.760 runtime=1 00:12:05.760 ioengine=libaio 00:12:05.760 direct=1 00:12:05.760 bs=4096 00:12:05.760 iodepth=1 00:12:05.760 norandommap=0 00:12:05.760 numjobs=1 00:12:05.760 00:12:05.760 verify_dump=1 00:12:05.760 verify_backlog=512 00:12:05.760 verify_state_save=0 00:12:05.760 do_verify=1 00:12:05.760 verify=crc32c-intel 00:12:05.760 [job0] 00:12:05.760 filename=/dev/nvme0n1 00:12:05.760 [job1] 00:12:05.760 filename=/dev/nvme0n2 00:12:05.760 [job2] 00:12:05.760 filename=/dev/nvme0n3 00:12:05.760 [job3] 00:12:05.760 filename=/dev/nvme0n4 00:12:05.760 Could not set queue depth (nvme0n1) 00:12:05.760 Could not set queue depth (nvme0n2) 00:12:05.760 Could not set queue depth (nvme0n3) 00:12:05.760 Could not set queue depth (nvme0n4) 00:12:05.760 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.760 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.761 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.761 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.761 fio-3.35 00:12:05.761 Starting 4 threads 00:12:07.136 00:12:07.136 job0: (groupid=0, jobs=1): err= 0: pid=3103115: Mon Oct 28 15:07:53 2024 00:12:07.136 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:07.136 slat (nsec): min=7008, max=25212, avg=8685.58, stdev=1239.49 00:12:07.136 clat (usec): min=178, max=41250, avg=408.35, stdev=2074.55 00:12:07.136 lat (usec): min=185, max=41258, avg=417.03, stdev=2074.62 00:12:07.136 clat percentiles (usec): 00:12:07.136 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 237], 
00:12:07.136 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:12:07.136 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 392], 95.00th=[ 412], 00:12:07.136 | 99.00th=[ 469], 99.50th=[ 502], 99.90th=[41157], 99.95th=[41157], 00:12:07.136 | 99.99th=[41157] 00:12:07.136 write: IOPS=1556, BW=6226KiB/s (6375kB/s)(6232KiB/1001msec); 0 zone resets 00:12:07.136 slat (nsec): min=9224, max=30045, avg=11083.29, stdev=1335.47 00:12:07.136 clat (usec): min=136, max=1001, avg=213.97, stdev=50.70 00:12:07.136 lat (usec): min=147, max=1026, avg=225.05, stdev=50.98 00:12:07.136 clat percentiles (usec): 00:12:07.136 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 172], 00:12:07.136 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:12:07.136 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 281], 00:12:07.136 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 734], 99.95th=[ 1004], 00:12:07.136 | 99.99th=[ 1004] 00:12:07.136 bw ( KiB/s): min= 8192, max= 8192, per=32.25%, avg=8192.00, stdev= 0.00, samples=1 00:12:07.136 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:07.136 lat (usec) : 250=54.04%, 500=45.64%, 750=0.16% 00:12:07.136 lat (msec) : 2=0.03%, 50=0.13% 00:12:07.136 cpu : usr=2.40%, sys=3.90%, ctx=3096, majf=0, minf=1 00:12:07.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.136 issued rwts: total=1536,1558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.136 job1: (groupid=0, jobs=1): err= 0: pid=3103118: Mon Oct 28 15:07:53 2024 00:12:07.136 read: IOPS=23, BW=95.5KiB/s (97.8kB/s)(96.0KiB/1005msec) 00:12:07.136 slat (nsec): min=8606, max=15703, avg=13230.79, stdev=1954.12 00:12:07.136 clat (usec): min=252, max=42039, avg=38169.08, stdev=9941.13 00:12:07.136 lat (usec): min=266, max=42051, avg=38182.31, stdev=9941.30 00:12:07.136 clat percentiles (usec): 00:12:07.136 | 1.00th=[ 253], 5.00th=[12649], 10.00th=[40633], 20.00th=[41157], 00:12:07.136 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:07.136 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:12:07.136 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:07.136 | 99.99th=[42206] 00:12:07.136 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:12:07.136 slat (nsec): min=6767, max=23279, avg=9909.21, stdev=1324.38 00:12:07.136 clat (usec): min=141, max=313, avg=159.61, stdev=11.77 00:12:07.136 lat (usec): min=150, max=337, avg=169.52, stdev=12.33 00:12:07.136 clat percentiles (usec): 00:12:07.136 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 153], 00:12:07.136 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:12:07.136 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 176], 00:12:07.136 | 99.00th=[ 194], 99.50th=[ 219], 99.90th=[ 314], 99.95th=[ 314], 00:12:07.136 | 99.99th=[ 314] 00:12:07.136 bw ( KiB/s): min= 4096, max= 4096, per=16.13%, avg=4096.00, stdev= 0.00, samples=1 00:12:07.136 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:07.136 lat (usec) : 250=95.34%, 500=0.37% 00:12:07.136 lat (msec) : 20=0.19%, 50=4.10% 00:12:07.136 cpu : usr=0.30%, sys=0.40%, ctx=536, majf=0, minf=2 00:12:07.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.136 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.136 job2: (groupid=0, jobs=1): err= 0: pid=3103126: Mon Oct 28 15:07:53 2024 00:12:07.136 read: IOPS=1758, BW=7036KiB/s (7205kB/s)(7296KiB/1037msec) 00:12:07.136 slat (nsec): min=5762, max=27623, avg=7763.60, stdev=1969.56 00:12:07.136 clat (usec): min=196, max=40548, avg=326.05, stdev=944.16 00:12:07.136 lat (usec): min=204, max=40557, avg=333.81, stdev=944.16 00:12:07.136 clat percentiles (usec): 00:12:07.136 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 260], 00:12:07.136 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:12:07.136 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 379], 95.00th=[ 408], 00:12:07.136 | 99.00th=[ 498], 99.50th=[ 570], 99.90th=[ 611], 99.95th=[40633], 00:12:07.136 | 99.99th=[40633] 00:12:07.136 write: IOPS=1974, BW=7900KiB/s (8089kB/s)(8192KiB/1037msec); 0 zone resets 00:12:07.136 slat (nsec): min=7728, max=38329, avg=9938.59, stdev=2138.16 00:12:07.136 clat (usec): min=138, max=397, avg=194.46, stdev=36.79 00:12:07.136 lat (usec): min=148, max=425, avg=204.40, stdev=36.46 00:12:07.136 clat percentiles (usec): 00:12:07.136 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:12:07.136 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 198], 60.00th=[ 210], 00:12:07.136 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 251], 00:12:07.136 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 388], 00:12:07.136 | 99.99th=[ 400] 00:12:07.136 bw ( KiB/s): min= 8192, max= 8192, per=32.25%, avg=8192.00, stdev= 0.00, samples=2 00:12:07.136 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:12:07.136 lat (usec) : 250=59.12%, 500=40.42%, 750=0.44% 00:12:07.136 lat (msec) : 50=0.03% 00:12:07.136 cpu : usr=1.16%, sys=3.86%, ctx=3873, majf=0, minf=1 00:12:07.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.136 issued rwts: total=1824,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.137 job3: (groupid=0, jobs=1): err= 0: pid=3103132: Mon Oct 28 15:07:53 2024 00:12:07.137 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:07.137 slat (nsec): min=4973, max=32686, avg=10138.32, stdev=4620.59 00:12:07.137 clat (usec): min=198, max=1215, avg=268.43, stdev=78.86 00:12:07.137 lat (usec): min=204, max=1222, avg=278.57, stdev=80.23 00:12:07.137 clat percentiles (usec): 00:12:07.137 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:12:07.137 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 269], 00:12:07.137 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 338], 95.00th=[ 412], 00:12:07.137 | 99.00th=[ 553], 99.50th=[ 611], 99.90th=[ 1156], 99.95th=[ 1188], 00:12:07.137 | 99.99th=[ 1221] 00:12:07.137 write: IOPS=2464, BW=9858KiB/s (10.1MB/s)(9868KiB/1001msec); 0 zone resets 00:12:07.137 slat (nsec): min=6319, max=42039, avg=7574.96, stdev=1361.52 00:12:07.137 clat (usec): min=132, max=980, avg=161.95, stdev=20.52 00:12:07.137 lat (usec): min=139, max=988, 
avg=169.52, stdev=20.79 00:12:07.137 clat percentiles (usec): 00:12:07.137 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:12:07.137 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:12:07.137 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:12:07.137 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 237], 99.95th=[ 330], 00:12:07.137 | 99.99th=[ 979] 00:12:07.137 bw ( KiB/s): min= 8728, max= 8728, per=34.36%, avg=8728.00, stdev= 0.00, samples=1 00:12:07.137 iops : min= 2182, max= 2182, avg=2182.00, stdev= 0.00, samples=1 00:12:07.137 lat (usec) : 250=79.29%, 500=19.84%, 750=0.66%, 1000=0.11% 00:12:07.137 lat (msec) : 2=0.09% 00:12:07.137 cpu : usr=2.00%, sys=4.20%, ctx=4516, majf=0, minf=1 00:12:07.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.137 issued rwts: total=2048,2467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.137 00:12:07.137 Run status group 0 (all jobs): 00:12:07.137 READ: bw=20.5MiB/s (21.5MB/s), 95.5KiB/s-8184KiB/s (97.8kB/s-8380kB/s), io=21.2MiB (22.2MB), run=1001-1037msec 00:12:07.137 WRITE: bw=24.8MiB/s (26.0MB/s), 2038KiB/s-9858KiB/s (2087kB/s-10.1MB/s), io=25.7MiB (27.0MB), run=1001-1037msec 00:12:07.137 00:12:07.137 Disk stats (read/write): 00:12:07.137 nvme0n1: ios=1455/1536, merge=0/0, ticks=701/318, in_queue=1019, util=84.87% 00:12:07.137 nvme0n2: ios=69/512, merge=0/0, ticks=806/80, in_queue=886, util=89.91% 00:12:07.137 nvme0n3: ios=1559/1559, merge=0/0, ticks=1390/310, in_queue=1700, util=92.86% 00:12:07.137 nvme0n4: ios=1803/2048, merge=0/0, ticks=525/329, in_queue=854, util=95.55% 00:12:07.137 15:07:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:07.137 [global] 00:12:07.137 thread=1 00:12:07.137 invalidate=1 00:12:07.137 rw=randwrite 00:12:07.137 time_based=1 00:12:07.137 runtime=1 00:12:07.137 ioengine=libaio 00:12:07.137 direct=1 00:12:07.137 bs=4096 00:12:07.137 iodepth=1 00:12:07.137 norandommap=0 00:12:07.137 numjobs=1 00:12:07.137 00:12:07.137 verify_dump=1 00:12:07.137 verify_backlog=512 00:12:07.137 verify_state_save=0 00:12:07.137 do_verify=1 00:12:07.137 verify=crc32c-intel 00:12:07.137 [job0] 00:12:07.137 filename=/dev/nvme0n1 00:12:07.137 [job1] 00:12:07.137 filename=/dev/nvme0n2 00:12:07.137 [job2] 00:12:07.137 filename=/dev/nvme0n3 00:12:07.137 [job3] 00:12:07.137 filename=/dev/nvme0n4 00:12:07.137 Could not set queue depth (nvme0n1) 00:12:07.137 Could not set queue depth (nvme0n2) 00:12:07.137 Could not set queue depth (nvme0n3) 00:12:07.137 Could not set queue depth (nvme0n4) 00:12:07.137 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.137 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.137 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.137 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.137 fio-3.35 00:12:07.137 Starting 4 threads 00:12:08.511 00:12:08.511 job0: (groupid=0, jobs=1): err= 0: pid=3103469: 
Mon Oct 28 15:07:55 2024 00:12:08.511 read: IOPS=24, BW=99.8KiB/s (102kB/s)(104KiB/1042msec) 00:12:08.511 slat (nsec): min=7788, max=28528, avg=13451.00, stdev=4230.00 00:12:08.511 clat (usec): min=476, max=42088, avg=36339.32, stdev=13196.09 00:12:08.511 lat (usec): min=483, max=42099, avg=36352.77, stdev=13197.85 00:12:08.511 clat percentiles (usec): 00:12:08.511 | 1.00th=[ 478], 5.00th=[ 510], 10.00th=[ 562], 20.00th=[40633], 00:12:08.511 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:08.511 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:08.511 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:08.511 | 99.99th=[42206] 00:12:08.511 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:12:08.511 slat (nsec): min=6640, max=36136, avg=10375.17, stdev=2516.82 00:12:08.511 clat (usec): min=142, max=426, avg=174.10, stdev=50.14 00:12:08.511 lat (usec): min=152, max=458, avg=184.47, stdev=50.76 00:12:08.511 clat percentiles (usec): 00:12:08.511 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:12:08.511 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:12:08.511 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 235], 95.00th=[ 310], 00:12:08.511 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 429], 99.95th=[ 429], 00:12:08.511 | 99.99th=[ 429] 00:12:08.512 bw ( KiB/s): min= 4096, max= 4096, per=20.84%, avg=4096.00, stdev= 0.00, samples=1 00:12:08.512 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:08.512 lat (usec) : 250=86.25%, 500=9.11%, 750=0.37% 00:12:08.512 lat (msec) : 50=4.28% 00:12:08.512 cpu : usr=0.29%, sys=0.48%, ctx=539, majf=0, minf=1 00:12:08.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.512 job1: (groupid=0, jobs=1): err= 0: pid=3103470: Mon Oct 28 15:07:55 2024 00:12:08.512 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:12:08.512 slat (nsec): min=9831, max=16782, avg=13598.86, stdev=2300.58 00:12:08.512 clat (usec): min=40885, max=41332, avg=40997.08, stdev=84.73 00:12:08.512 lat (usec): min=40896, max=41342, avg=41010.68, stdev=83.97 00:12:08.512 clat percentiles (usec): 00:12:08.512 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:08.512 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:08.512 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:08.512 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:08.512 | 99.99th=[41157] 00:12:08.512 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:12:08.512 slat (nsec): min=10736, max=33972, avg=12365.79, stdev=2226.49 00:12:08.512 clat (usec): min=169, max=389, avg=233.96, stdev=31.36 00:12:08.512 lat (usec): min=182, max=403, avg=246.33, stdev=31.70 00:12:08.512 clat percentiles (usec): 00:12:08.512 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:12:08.512 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:12:08.512 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 273], 95.00th=[ 306], 00:12:08.512 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 392], 
00:12:08.512 | 99.99th=[ 392] 00:12:08.512 bw ( KiB/s): min= 4096, max= 4096, per=20.84%, avg=4096.00, stdev= 0.00, samples=1 00:12:08.512 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:08.512 lat (usec) : 250=82.77%, 500=13.11% 00:12:08.512 lat (msec) : 50=4.12% 00:12:08.512 cpu : usr=0.10%, sys=0.78%, ctx=535, majf=0, minf=1 00:12:08.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.512 job2: (groupid=0, jobs=1): err= 0: pid=3103471: Mon Oct 28 15:07:55 2024 00:12:08.512 read: IOPS=1015, BW=4063KiB/s (4160kB/s)(4136KiB/1018msec) 00:12:08.512 slat (nsec): min=5040, max=41106, avg=12900.05, stdev=4981.26 00:12:08.512 clat (usec): min=204, max=41329, avg=664.39, stdev=3560.39 00:12:08.512 lat (usec): min=210, max=41337, avg=677.29, stdev=3560.60 00:12:08.512 clat percentiles (usec): 00:12:08.512 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:12:08.512 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 367], 60.00th=[ 433], 00:12:08.512 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 469], 00:12:08.512 | 99.00th=[ 506], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:08.512 | 99.99th=[41157] 00:12:08.512 write: IOPS=1508, BW=6035KiB/s (6180kB/s)(6144KiB/1018msec); 0 zone resets 00:12:08.512 slat (nsec): min=6969, max=55818, avg=9597.31, stdev=3500.09 00:12:08.512 clat (usec): min=138, max=467, avg=191.70, stdev=44.53 00:12:08.512 lat (usec): min=145, max=490, avg=201.30, stdev=46.06 00:12:08.512 clat percentiles (usec): 00:12:08.512 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:12:08.512 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 190], 00:12:08.512 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 262], 00:12:08.512 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 457], 99.95th=[ 469], 00:12:08.512 | 99.99th=[ 469] 00:12:08.512 bw ( KiB/s): min= 4096, max= 8192, per=31.26%, avg=6144.00, stdev=2896.31, samples=2 00:12:08.512 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:12:08.512 lat (usec) : 250=65.84%, 500=33.74%, 750=0.12% 00:12:08.512 lat (msec) : 50=0.31% 00:12:08.512 cpu : usr=1.08%, sys=3.15%, ctx=2571, majf=0, minf=1 00:12:08.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.512 job3: (groupid=0, jobs=1): err= 0: pid=3103472: Mon Oct 28 15:07:55 2024 00:12:08.512 read: IOPS=2172, BW=8691KiB/s (8900kB/s)(8700KiB/1001msec) 00:12:08.512 slat (nsec): min=6992, max=38705, avg=8143.33, stdev=1786.24 00:12:08.512 clat (usec): min=181, max=1113, avg=233.11, stdev=48.17 00:12:08.512 lat (usec): min=189, max=1123, avg=241.25, stdev=48.48 00:12:08.512 clat percentiles (usec): 00:12:08.512 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:12:08.512 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:12:08.512 | 70.00th=[ 237], 80.00th=[ 245], 
90.00th=[ 258], 95.00th=[ 269], 00:12:08.512 | 99.00th=[ 498], 99.50th=[ 545], 99.90th=[ 799], 99.95th=[ 881], 00:12:08.512 | 99.99th=[ 1106] 00:12:08.512 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:08.512 slat (nsec): min=8922, max=48763, avg=10324.25, stdev=2456.19 00:12:08.512 clat (usec): min=132, max=3269, avg=170.76, stdev=66.67 00:12:08.512 lat (usec): min=141, max=3278, avg=181.08, stdev=66.79 00:12:08.512 clat percentiles (usec): 00:12:08.512 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:12:08.512 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:12:08.512 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 212], 00:12:08.512 | 99.00th=[ 229], 99.50th=[ 249], 99.90th=[ 498], 99.95th=[ 562], 00:12:08.512 | 99.99th=[ 3261] 00:12:08.512 bw ( KiB/s): min=10344, max=10344, per=52.63%, avg=10344.00, stdev= 0.00, samples=1 00:12:08.512 iops : min= 2586, max= 2586, avg=2586.00, stdev= 0.00, samples=1 00:12:08.512 lat (usec) : 250=92.90%, 500=6.63%, 750=0.36%, 1000=0.06% 00:12:08.512 lat (msec) : 2=0.02%, 4=0.02% 00:12:08.512 cpu : usr=2.20%, sys=6.80%, ctx=4737, majf=0, minf=1 00:12:08.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.512 issued rwts: total=2175,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:08.512 00:12:08.512 Run status group 0 (all jobs): 00:12:08.512 READ: bw=12.2MiB/s (12.8MB/s), 85.4KiB/s-8691KiB/s (87.4kB/s-8900kB/s), io=12.7MiB (13.3MB), run=1001-1042msec 00:12:08.512 WRITE: bw=19.2MiB/s (20.1MB/s), 1965KiB/s-9.99MiB/s (2013kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1042msec 00:12:08.512 00:12:08.512 Disk stats (read/write): 00:12:08.512 nvme0n1: ios=70/512, merge=0/0, ticks=723/91, in_queue=814, util=84.37% 00:12:08.512 nvme0n2: ios=55/512, merge=0/0, ticks=1656/121, in_queue=1777, util=97.64% 00:12:08.512 nvme0n3: ios=1050/1536, merge=0/0, ticks=1387/290, in_queue=1677, util=96.36% 00:12:08.512 nvme0n4: ios=1849/2048, merge=0/0, ticks=1340/354, in_queue=1694, util=96.98% 00:12:08.512 15:07:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:08.512 [global] 00:12:08.512 thread=1 00:12:08.512 invalidate=1 00:12:08.512 rw=write 00:12:08.512 time_based=1 00:12:08.512 runtime=1 00:12:08.512 ioengine=libaio 00:12:08.512 direct=1 00:12:08.512 bs=4096 00:12:08.512 iodepth=128 00:12:08.512 norandommap=0 00:12:08.512 numjobs=1 00:12:08.512 00:12:08.512 verify_dump=1 00:12:08.512 verify_backlog=512 00:12:08.512 verify_state_save=0 00:12:08.512 do_verify=1 00:12:08.512 verify=crc32c-intel 00:12:08.512 [job0] 00:12:08.512 filename=/dev/nvme0n1 00:12:08.512 [job1] 00:12:08.512 filename=/dev/nvme0n2 00:12:08.512 [job2] 00:12:08.512 filename=/dev/nvme0n3 00:12:08.512 [job3] 00:12:08.512 filename=/dev/nvme0n4 00:12:08.512 Could not set queue depth (nvme0n1) 00:12:08.512 Could not set queue depth (nvme0n2) 00:12:08.512 Could not set queue depth (nvme0n3) 00:12:08.512 Could not set queue depth (nvme0n4) 00:12:08.770 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.770 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.770 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.770 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.770 fio-3.35 00:12:08.770 Starting 4 threads 00:12:10.144 00:12:10.144 job0: (groupid=0, jobs=1): err= 0: pid=3103702: Mon Oct 28 15:07:56 2024 00:12:10.144 read: IOPS=4570, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1005msec) 00:12:10.144 slat (usec): min=3, max=6934, avg=105.31, stdev=563.38 00:12:10.144 clat (usec): min=3589, max=28875, avg=12991.90, stdev=4192.83 00:12:10.144 lat (usec): min=6201, max=30505, avg=13097.21, stdev=4238.94 00:12:10.144 clat percentiles (usec): 00:12:10.144 | 1.00th=[ 7046], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9634], 00:12:10.144 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11994], 60.00th=[13173], 00:12:10.144 | 70.00th=[14877], 80.00th=[15926], 90.00th=[18744], 95.00th=[21627], 00:12:10.144 | 99.00th=[26084], 99.50th=[27919], 99.90th=[28705], 99.95th=[28705], 00:12:10.144 | 99.99th=[28967] 00:12:10.144 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:12:10.144 slat (usec): min=4, max=5320, avg=104.34, stdev=395.47 00:12:10.144 clat (usec): min=5593, max=37326, avg=14642.59, stdev=6723.21 00:12:10.144 lat (usec): min=5604, max=37334, avg=14746.93, stdev=6767.87 00:12:10.144 clat percentiles (usec): 00:12:10.144 | 1.00th=[ 7242], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:12:10.144 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11207], 60.00th=[12125], 00:12:10.144 | 70.00th=[17957], 80.00th=[20317], 90.00th=[23725], 95.00th=[30016], 00:12:10.144 | 99.00th=[36439], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:12:10.144 | 99.99th=[37487] 00:12:10.144 bw ( KiB/s): min=14984, max=21880, per=31.41%, avg=18432.00, stdev=4876.21, samples=2 00:12:10.144 iops : min= 3746, max= 5470, avg=4608.00, stdev=1219.05, samples=2 00:12:10.144 lat (msec) : 4=0.01%, 10=22.58%, 20=62.61%, 50=14.79% 00:12:10.144 cpu : usr=4.58%, sys=7.47%, ctx=664, majf=0, minf=1 00:12:10.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:10.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.144 issued rwts: total=4593,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.145 job1: (groupid=0, jobs=1): err= 0: pid=3103704: Mon Oct 28 15:07:56 2024 00:12:10.145 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:10.145 slat (usec): min=2, max=27844, avg=197.08, stdev=1653.77 00:12:10.145 clat (usec): min=4419, max=75103, avg=25101.54, stdev=14299.78 00:12:10.145 lat (usec): min=4430, max=75117, avg=25298.62, stdev=14419.53 00:12:10.145 clat percentiles (usec): 00:12:10.145 | 1.00th=[ 7439], 5.00th=[10290], 10.00th=[10552], 20.00th=[11600], 00:12:10.145 | 30.00th=[12911], 40.00th=[15926], 50.00th=[22938], 60.00th=[27395], 00:12:10.145 | 70.00th=[30802], 80.00th=[38536], 90.00th=[49021], 95.00th=[53740], 00:12:10.145 | 99.00th=[58459], 99.50th=[65799], 99.90th=[65799], 99.95th=[69731], 00:12:10.145 | 99.99th=[74974] 00:12:10.145 write: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1004msec); 0 zone resets 00:12:10.145 slat (usec): min=3, max=14269, avg=163.52, stdev=929.48 00:12:10.145 clat (usec): min=548, max=114139, 
avg=19806.56, stdev=15596.72 00:12:10.145 lat (msec): min=5, max=114, avg=19.97, stdev=15.70 00:12:10.145 clat percentiles (msec): 00:12:10.145 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:12:10.145 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 20], 00:12:10.145 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 34], 00:12:10.145 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 114], 99.95th=[ 114], 00:12:10.145 | 99.99th=[ 114] 00:12:10.145 bw ( KiB/s): min= 9168, max=13560, per=19.36%, avg=11364.00, stdev=3105.61, samples=2 00:12:10.145 iops : min= 2292, max= 3390, avg=2841.00, stdev=776.40, samples=2 00:12:10.145 lat (usec) : 750=0.02% 00:12:10.145 lat (msec) : 10=5.25%, 20=48.87%, 50=40.93%, 100=3.94%, 250=0.99% 00:12:10.145 cpu : usr=1.89%, sys=2.09%, ctx=264, majf=0, minf=1 00:12:10.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:10.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.145 issued rwts: total=2560,2969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.145 job2: (groupid=0, jobs=1): err= 0: pid=3103705: Mon Oct 28 15:07:56 2024 00:12:10.145 read: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1005msec) 00:12:10.145 slat (usec): min=3, max=15645, avg=144.09, stdev=960.91 00:12:10.145 clat (usec): min=2928, max=48789, avg=16768.68, stdev=6551.26 00:12:10.145 lat (usec): min=4264, max=48797, avg=16912.77, stdev=6622.72 00:12:10.145 clat percentiles (usec): 00:12:10.145 | 1.00th=[ 7635], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[13698], 00:12:10.145 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:12:10.145 | 70.00th=[16581], 80.00th=[18482], 90.00th=[23462], 95.00th=[31851], 00:12:10.145 | 99.00th=[43779], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:12:10.145 | 99.99th=[49021] 00:12:10.145 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:12:10.145 slat (usec): min=5, max=11135, avg=133.38, stdev=670.84 00:12:10.145 clat (usec): min=3222, max=48779, avg=19415.24, stdev=8937.76 00:12:10.145 lat (usec): min=3230, max=48788, avg=19548.62, stdev=8997.13 00:12:10.145 clat percentiles (usec): 00:12:10.145 | 1.00th=[ 4359], 5.00th=[10028], 10.00th=[11338], 20.00th=[12649], 00:12:10.145 | 30.00th=[13042], 40.00th=[14484], 50.00th=[18482], 60.00th=[20841], 00:12:10.145 | 70.00th=[21627], 80.00th=[23987], 90.00th=[30802], 95.00th=[40633], 00:12:10.145 | 99.00th=[46924], 99.50th=[47973], 99.90th=[48497], 99.95th=[49021], 00:12:10.145 | 99.99th=[49021] 00:12:10.145 bw ( KiB/s): min=13392, max=15310, per=24.45%, avg=14351.00, stdev=1356.23, samples=2 00:12:10.145 iops : min= 3348, max= 3827, avg=3587.50, stdev=338.70, samples=2 00:12:10.145 lat (msec) : 4=0.27%, 10=4.86%, 20=63.31%, 50=31.56% 00:12:10.145 cpu : usr=3.88%, sys=4.28%, ctx=389, majf=0, minf=1 00:12:10.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:10.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.145 issued rwts: total=3450,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.145 job3: (groupid=0, jobs=1): err= 0: pid=3103706: Mon Oct 28 15:07:56 2024 00:12:10.145 read: IOPS=3265, BW=12.8MiB/s 
(13.4MB/s)(12.8MiB/1005msec) 00:12:10.145 slat (usec): min=2, max=23781, avg=154.39, stdev=1012.90 00:12:10.145 clat (usec): min=1026, max=67529, avg=18713.03, stdev=9855.02 00:12:10.145 lat (usec): min=9600, max=67536, avg=18867.42, stdev=9916.51 00:12:10.145 clat percentiles (usec): 00:12:10.145 | 1.00th=[10028], 5.00th=[11600], 10.00th=[12387], 20.00th=[12649], 00:12:10.145 | 30.00th=[12911], 40.00th=[13566], 50.00th=[15533], 60.00th=[17433], 00:12:10.145 | 70.00th=[21365], 80.00th=[22152], 90.00th=[26870], 95.00th=[36439], 00:12:10.145 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:12:10.145 | 99.99th=[67634] 00:12:10.145 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:12:10.145 slat (usec): min=3, max=13491, avg=133.57, stdev=731.75 00:12:10.145 clat (usec): min=7251, max=87074, avg=18267.97, stdev=12402.44 00:12:10.145 lat (usec): min=7256, max=87090, avg=18401.54, stdev=12469.34 00:12:10.145 clat percentiles (usec): 00:12:10.145 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[12256], 20.00th=[12518], 00:12:10.145 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13566], 60.00th=[13829], 00:12:10.145 | 70.00th=[18744], 80.00th=[20317], 90.00th=[25560], 95.00th=[44827], 00:12:10.145 | 99.00th=[80217], 99.50th=[83362], 99.90th=[87557], 99.95th=[87557], 00:12:10.145 | 99.99th=[87557] 00:12:10.145 bw ( KiB/s): min=12288, max=16384, per=24.43%, avg=14336.00, stdev=2896.31, samples=2 00:12:10.145 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:12:10.145 lat (msec) : 2=0.01%, 10=1.98%, 20=70.06%, 50=24.72%, 100=3.23% 00:12:10.145 cpu : usr=2.49%, sys=2.89%, ctx=309, majf=0, minf=1 00:12:10.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:10.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.145 issued rwts: total=3282,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.145 00:12:10.145 Run status group 0 (all jobs): 00:12:10.145 READ: bw=54.0MiB/s (56.6MB/s), 9.96MiB/s-17.9MiB/s (10.4MB/s-18.7MB/s), io=54.2MiB (56.9MB), run=1004-1005msec 00:12:10.145 WRITE: bw=57.3MiB/s (60.1MB/s), 11.6MiB/s-17.9MiB/s (12.1MB/s-18.8MB/s), io=57.6MiB (60.4MB), run=1004-1005msec 00:12:10.145 00:12:10.145 Disk stats (read/write): 00:12:10.145 nvme0n1: ios=4000/4096, merge=0/0, ticks=25035/26862, in_queue=51897, util=86.17% 00:12:10.145 nvme0n2: ios=1804/2048, merge=0/0, ticks=24526/19989, in_queue=44515, util=86.67% 00:12:10.145 nvme0n3: ios=2931/3072, merge=0/0, ticks=48265/55612, in_queue=103877, util=88.67% 00:12:10.145 nvme0n4: ios=2938/3072, merge=0/0, ticks=14845/15253, in_queue=30098, util=89.62% 00:12:10.145 15:07:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:10.145 [global] 00:12:10.145 thread=1 00:12:10.145 invalidate=1 00:12:10.145 rw=randwrite 00:12:10.145 time_based=1 00:12:10.145 runtime=1 00:12:10.145 ioengine=libaio 00:12:10.145 direct=1 00:12:10.145 bs=4096 00:12:10.145 iodepth=128 00:12:10.145 norandommap=0 00:12:10.145 numjobs=1 00:12:10.145 00:12:10.145 verify_dump=1 00:12:10.145 verify_backlog=512 00:12:10.145 verify_state_save=0 00:12:10.145 do_verify=1 00:12:10.145 verify=crc32c-intel 00:12:10.145 [job0] 00:12:10.145 filename=/dev/nvme0n1 00:12:10.145 [job1] 
00:12:10.145 filename=/dev/nvme0n2 00:12:10.145 [job2] 00:12:10.145 filename=/dev/nvme0n3 00:12:10.145 [job3] 00:12:10.145 filename=/dev/nvme0n4 00:12:10.145 Could not set queue depth (nvme0n1) 00:12:10.145 Could not set queue depth (nvme0n2) 00:12:10.145 Could not set queue depth (nvme0n3) 00:12:10.145 Could not set queue depth (nvme0n4) 00:12:10.145 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.145 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.145 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.145 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.145 fio-3.35 00:12:10.145 Starting 4 threads 00:12:11.519 00:12:11.519 job0: (groupid=0, jobs=1): err= 0: pid=3103932: Mon Oct 28 15:07:58 2024 00:12:11.519 read: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.5MiB/1007msec) 00:12:11.519 slat (usec): min=3, max=10031, avg=93.27, stdev=642.07 00:12:11.519 clat (usec): min=2656, max=23454, avg=11609.79, stdev=2926.93 00:12:11.519 lat (usec): min=3472, max=23468, avg=11703.06, stdev=2968.71 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[ 5080], 5.00th=[ 7439], 10.00th=[ 9372], 20.00th=[ 9765], 00:12:11.519 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[11076], 00:12:11.519 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15795], 95.00th=[17695], 00:12:11.519 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21627], 99.95th=[21627], 00:12:11.519 | 99.99th=[23462] 00:12:11.519 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:12:11.519 slat (usec): min=4, max=16026, avg=79.49, stdev=479.35 00:12:11.519 clat (usec): min=1151, max=40302, avg=11344.18, stdev=4613.74 00:12:11.519 lat (usec): min=1159, max=40338, avg=11423.67, stdev=4648.14 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[ 3523], 5.00th=[ 5735], 10.00th=[ 7898], 20.00th=[ 9765], 00:12:11.519 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:12:11.519 | 70.00th=[11076], 80.00th=[11338], 90.00th=[13304], 95.00th=[21890], 00:12:11.519 | 99.00th=[33424], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:12:11.519 | 99.99th=[40109] 00:12:11.519 bw ( KiB/s): min=20480, max=24576, per=32.70%, avg=22528.00, stdev=2896.31, samples=2 00:12:11.519 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:12:11.519 lat (msec) : 2=0.10%, 4=1.03%, 10=26.76%, 20=68.73%, 50=3.38% 00:12:11.519 cpu : usr=5.77%, sys=8.05%, ctx=628, majf=0, minf=1 00:12:11.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:11.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.519 issued rwts: total=5499,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.519 job1: (groupid=0, jobs=1): err= 0: pid=3103933: Mon Oct 28 15:07:58 2024 00:12:11.519 read: IOPS=3775, BW=14.7MiB/s (15.5MB/s)(14.9MiB/1009msec) 00:12:11.519 slat (usec): min=2, max=20377, avg=140.07, stdev=1013.92 00:12:11.519 clat (usec): min=3007, max=52502, avg=17335.28, stdev=9891.66 00:12:11.519 lat (usec): min=3245, max=52510, avg=17475.35, stdev=9957.87 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[ 5735], 
5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[11207], 00:12:11.519 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12911], 00:12:11.519 | 70.00th=[22152], 80.00th=[23200], 90.00th=[30802], 95.00th=[38536], 00:12:11.519 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:12:11.519 | 99.99th=[52691] 00:12:11.519 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:12:11.519 slat (usec): min=3, max=23107, avg=104.48, stdev=613.83 00:12:11.519 clat (usec): min=1302, max=44016, avg=15133.08, stdev=6014.36 00:12:11.519 lat (usec): min=1321, max=44034, avg=15237.55, stdev=6047.61 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[ 5866], 5.00th=[ 9372], 10.00th=[10814], 20.00th=[11076], 00:12:11.519 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12911], 00:12:11.519 | 70.00th=[16188], 80.00th=[23200], 90.00th=[23987], 95.00th=[25297], 00:12:11.519 | 99.00th=[31851], 99.50th=[33162], 99.90th=[33162], 99.95th=[42730], 00:12:11.519 | 99.99th=[43779] 00:12:11.519 bw ( KiB/s): min=13696, max=19072, per=23.78%, avg=16384.00, stdev=3801.41, samples=2 00:12:11.519 iops : min= 3424, max= 4768, avg=4096.00, stdev=950.35, samples=2 00:12:11.519 lat (msec) : 2=0.03%, 4=0.23%, 10=8.92%, 20=59.99%, 50=30.20% 00:12:11.519 lat (msec) : 100=0.65% 00:12:11.519 cpu : usr=3.67%, sys=5.85%, ctx=401, majf=0, minf=1 00:12:11.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:11.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.519 issued rwts: total=3809,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.519 job2: (groupid=0, jobs=1): err= 0: pid=3103936: Mon Oct 28 15:07:58 2024 00:12:11.519 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:12:11.519 slat (usec): min=2, max=21015, avg=165.40, stdev=1159.49 00:12:11.519 clat (usec): min=6436, max=43429, avg=19745.03, stdev=6322.20 00:12:11.519 lat (usec): min=6442, max=43433, avg=19910.43, stdev=6414.93 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[10159], 5.00th=[12125], 10.00th=[13829], 20.00th=[14615], 00:12:11.519 | 30.00th=[15533], 40.00th=[16712], 50.00th=[17957], 60.00th=[20579], 00:12:11.519 | 70.00th=[22414], 80.00th=[22938], 90.00th=[28705], 95.00th=[33162], 00:12:11.519 | 99.00th=[40109], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:12:11.519 | 99.99th=[43254] 00:12:11.519 write: IOPS=3100, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1009msec); 0 zone resets 00:12:11.519 slat (usec): min=3, max=17906, avg=151.22, stdev=800.88 00:12:11.519 clat (usec): min=4721, max=62499, avg=21522.41, stdev=9344.97 00:12:11.519 lat (usec): min=4725, max=62506, avg=21673.62, stdev=9415.62 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[ 8356], 5.00th=[ 9241], 10.00th=[12780], 20.00th=[14615], 00:12:11.519 | 30.00th=[15401], 40.00th=[19530], 50.00th=[21890], 60.00th=[23200], 00:12:11.519 | 70.00th=[23725], 80.00th=[24249], 90.00th=[30016], 95.00th=[41157], 00:12:11.519 | 99.00th=[58983], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:12:11.519 | 99.99th=[62653] 00:12:11.519 bw ( KiB/s): min=12288, max=12312, per=17.85%, avg=12300.00, stdev=16.97, samples=2 00:12:11.519 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:12:11.519 lat (msec) : 10=3.23%, 20=48.15%, 50=46.95%, 100=1.68% 00:12:11.519 cpu : usr=1.98%, 
sys=3.97%, ctx=331, majf=0, minf=1 00:12:11.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:11.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.519 issued rwts: total=3072,3128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.519 job3: (groupid=0, jobs=1): err= 0: pid=3103941: Mon Oct 28 15:07:58 2024 00:12:11.519 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:12:11.519 slat (usec): min=3, max=8281, avg=101.33, stdev=556.43 00:12:11.519 clat (usec): min=6250, max=21552, avg=13158.95, stdev=1748.07 00:12:11.519 lat (usec): min=6255, max=22774, avg=13260.28, stdev=1761.72 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[11207], 20.00th=[11863], 00:12:11.519 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:12:11.519 | 70.00th=[13698], 80.00th=[14615], 90.00th=[15664], 95.00th=[15926], 00:12:11.519 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19792], 99.95th=[20317], 00:12:11.519 | 99.99th=[21627] 00:12:11.519 write: IOPS=4519, BW=17.7MiB/s (18.5MB/s)(17.9MiB/1012msec); 0 zone resets 00:12:11.519 slat (usec): min=4, max=19542, avg=119.76, stdev=755.60 00:12:11.519 clat (usec): min=4506, max=51129, avg=16160.47, stdev=7304.11 00:12:11.519 lat (usec): min=4519, max=51186, avg=16280.23, stdev=7353.41 00:12:11.519 clat percentiles (usec): 00:12:11.519 | 1.00th=[ 7373], 5.00th=[10945], 10.00th=[11863], 20.00th=[12518], 00:12:11.519 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13566], 00:12:11.519 | 70.00th=[14353], 80.00th=[17957], 90.00th=[25297], 95.00th=[34341], 00:12:11.519 | 99.00th=[44303], 99.50th=[48497], 99.90th=[51119], 99.95th=[51119], 00:12:11.519 | 99.99th=[51119] 00:12:11.519 bw ( KiB/s): min=16992, max=18584, per=25.82%, avg=17788.00, stdev=1125.71, samples=2 00:12:11.519 iops : min= 4248, max= 4646, avg=4447.00, stdev=281.43, samples=2 00:12:11.519 lat (msec) : 10=2.69%, 20=87.83%, 50=9.35%, 100=0.13% 00:12:11.519 cpu : usr=4.35%, sys=6.23%, ctx=461, majf=0, minf=1 00:12:11.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:11.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.520 issued rwts: total=4096,4574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.520 00:12:11.520 Run status group 0 (all jobs): 00:12:11.520 READ: bw=63.6MiB/s (66.7MB/s), 11.9MiB/s-21.3MiB/s (12.5MB/s-22.4MB/s), io=64.4MiB (67.5MB), run=1007-1012msec 00:12:11.520 WRITE: bw=67.3MiB/s (70.5MB/s), 12.1MiB/s-21.8MiB/s (12.7MB/s-22.9MB/s), io=68.1MiB (71.4MB), run=1007-1012msec 00:12:11.520 00:12:11.520 Disk stats (read/write): 00:12:11.520 nvme0n1: ios=4631/4687, merge=0/0, ticks=51246/47875, in_queue=99121, util=96.19% 00:12:11.520 nvme0n2: ios=3094/3232, merge=0/0, ticks=34621/35403, in_queue=70024, util=96.32% 00:12:11.520 nvme0n3: ios=2105/2560, merge=0/0, ticks=41823/50834, in_queue=92657, util=97.07% 00:12:11.520 nvme0n4: ios=3072/3567, merge=0/0, ticks=13798/21068, in_queue=34866, util=89.04% 00:12:11.520 15:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:11.520 15:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 
-- # fio_pid=3104070 00:12:11.520 15:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:11.520 15:07:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:11.520 [global] 00:12:11.520 thread=1 00:12:11.520 invalidate=1 00:12:11.520 rw=read 00:12:11.520 time_based=1 00:12:11.520 runtime=10 00:12:11.520 ioengine=libaio 00:12:11.520 direct=1 00:12:11.520 bs=4096 00:12:11.520 iodepth=1 00:12:11.520 norandommap=1 00:12:11.520 numjobs=1 00:12:11.520 00:12:11.520 [job0] 00:12:11.520 filename=/dev/nvme0n1 00:12:11.520 [job1] 00:12:11.520 filename=/dev/nvme0n2 00:12:11.520 [job2] 00:12:11.520 filename=/dev/nvme0n3 00:12:11.520 [job3] 00:12:11.520 filename=/dev/nvme0n4 00:12:11.520 Could not set queue depth (nvme0n1) 00:12:11.520 Could not set queue depth (nvme0n2) 00:12:11.520 Could not set queue depth (nvme0n3) 00:12:11.520 Could not set queue depth (nvme0n4) 00:12:11.520 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.520 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.520 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.520 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.520 fio-3.35 00:12:11.520 Starting 4 threads 00:12:14.802 15:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:14.802 15:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:14.802 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46997504, buflen=4096 00:12:14.802 fio: pid=3104291, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:15.059 15:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.059 15:08:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:15.059 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=21041152, buflen=4096 00:12:15.059 fio: pid=3104290, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:15.625 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=389120, buflen=4096 00:12:15.625 fio: pid=3104288, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:15.625 15:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.625 15:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:16.191 15:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.191 15:08:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:12:16.191 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=15589376, buflen=4096 00:12:16.191 fio: pid=3104289, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:12:16.191 00:12:16.191 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3104288: Mon Oct 28 15:08:02 2024 00:12:16.191 read: IOPS=24, BW=98.2KiB/s (101kB/s)(380KiB/3869msec) 00:12:16.191 slat (usec): min=10, max=14881, avg=325.96, stdev=2130.49 00:12:16.191 clat (usec): min=289, max=41984, avg=40146.57, stdev=5868.33 00:12:16.191 lat (usec): min=305, max=56065, avg=40475.77, stdev=6297.04 00:12:16.191 clat percentiles (usec): 00:12:16.191 | 1.00th=[ 289], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:16.191 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:16.191 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:16.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:16.191 | 99.99th=[42206] 00:12:16.191 bw ( KiB/s): min= 96, max= 104, per=0.52%, avg=98.71, stdev= 3.77, samples=7 00:12:16.191 iops : min= 24, max= 26, avg=24.57, stdev= 0.98, samples=7 00:12:16.191 lat (usec) : 500=2.08% 00:12:16.191 lat (msec) : 50=96.88% 00:12:16.191 cpu : usr=0.05%, sys=0.00%, ctx=98, majf=0, minf=2 00:12:16.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.191 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3104289: Mon Oct 28 15:08:02 2024 00:12:16.191 read: IOPS=877, BW=3509KiB/s (3594kB/s)(14.9MiB/4338msec) 00:12:16.191 slat (usec): min=4, max=15614, avg=24.81, stdev=373.09 00:12:16.191 clat (usec): min=177, max=49065, avg=1111.72, stdev=5708.71 00:12:16.191 lat (usec): min=183, max=56701, avg=1134.57, stdev=5775.09 00:12:16.191 clat percentiles (usec): 00:12:16.191 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 219], 00:12:16.191 | 30.00th=[ 233], 40.00th=[ 265], 50.00th=[ 297], 60.00th=[ 314], 00:12:16.191 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 457], 00:12:16.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:12:16.191 | 99.99th=[49021] 00:12:16.191 bw ( KiB/s): min= 96, max=12424, per=20.06%, avg=3795.38, stdev=5523.21, samples=8 00:12:16.191 iops : min= 24, max= 3106, avg=948.75, stdev=1380.87, samples=8 00:12:16.191 lat (usec) : 250=37.04%, 500=59.29%, 750=1.52%, 1000=0.03% 00:12:16.191 lat (msec) : 2=0.05%, 4=0.03%, 10=0.03%, 50=2.00% 00:12:16.191 cpu : usr=0.62%, sys=1.31%, ctx=3812, majf=0, minf=1 00:12:16.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 issued rwts: total=3807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.191 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3104290: Mon Oct 28 15:08:02 2024 00:12:16.191 read: 
IOPS=1508, BW=6031KiB/s (6176kB/s)(20.1MiB/3407msec) 00:12:16.191 slat (nsec): min=5113, max=64502, avg=10275.30, stdev=4425.11 00:12:16.191 clat (usec): min=187, max=47051, avg=645.77, stdev=4006.42 00:12:16.191 lat (usec): min=196, max=47062, avg=656.04, stdev=4006.70 00:12:16.191 clat percentiles (usec): 00:12:16.191 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:12:16.191 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 247], 00:12:16.191 | 70.00th=[ 255], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:12:16.191 | 99.00th=[ 807], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:12:16.191 | 99.99th=[46924] 00:12:16.191 bw ( KiB/s): min= 104, max=15464, per=36.15%, avg=6837.33, stdev=6912.89, samples=6 00:12:16.191 iops : min= 26, max= 3866, avg=1709.33, stdev=1728.22, samples=6 00:12:16.191 lat (usec) : 250=65.01%, 500=33.83%, 750=0.14%, 1000=0.02% 00:12:16.191 lat (msec) : 20=0.02%, 50=0.97% 00:12:16.191 cpu : usr=0.32%, sys=2.06%, ctx=5138, majf=0, minf=2 00:12:16.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 issued rwts: total=5138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.191 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3104291: Mon Oct 28 15:08:02 2024 00:12:16.191 read: IOPS=3739, BW=14.6MiB/s (15.3MB/s)(44.8MiB/3069msec) 00:12:16.191 slat (nsec): min=6818, max=62625, avg=10021.86, stdev=3620.31 00:12:16.191 clat (usec): min=172, max=2159, avg=252.48, stdev=59.15 00:12:16.191 lat (usec): min=180, max=2170, avg=262.50, stdev=60.05 00:12:16.191 clat percentiles (usec): 00:12:16.191 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:12:16.191 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 243], 60.00th=[ 253], 00:12:16.191 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 351], 00:12:16.191 | 99.00th=[ 490], 99.50th=[ 523], 99.90th=[ 619], 99.95th=[ 644], 00:12:16.191 | 99.99th=[ 1352] 00:12:16.191 bw ( KiB/s): min=11904, max=18064, per=79.76%, avg=15085.33, stdev=2438.56, samples=6 00:12:16.191 iops : min= 2976, max= 4516, avg=3771.33, stdev=609.64, samples=6 00:12:16.191 lat (usec) : 250=57.12%, 500=42.09%, 750=0.75% 00:12:16.191 lat (msec) : 2=0.03%, 4=0.01% 00:12:16.191 cpu : usr=2.25%, sys=5.67%, ctx=11476, majf=0, minf=1 00:12:16.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.191 issued rwts: total=11475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.191 00:12:16.191 Run status group 0 (all jobs): 00:12:16.191 READ: bw=18.5MiB/s (19.4MB/s), 98.2KiB/s-14.6MiB/s (101kB/s-15.3MB/s), io=80.1MiB (84.0MB), run=3069-4338msec 00:12:16.191 00:12:16.191 Disk stats (read/write): 00:12:16.191 nvme0n1: ios=95/0, merge=0/0, ticks=3816/0, in_queue=3816, util=95.82% 00:12:16.191 nvme0n2: ios=3842/0, merge=0/0, ticks=5072/0, in_queue=5072, util=99.24% 00:12:16.191 nvme0n3: ios=5136/0, merge=0/0, ticks=3260/0, in_queue=3260, util=97.02% 00:12:16.191 nvme0n4: ios=10715/0, merge=0/0, ticks=3592/0, in_queue=3592, 
util=99.49% 00:12:16.450 15:08:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.450 15:08:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:17.016 15:08:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:17.016 15:08:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:17.274 15:08:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:17.274 15:08:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:18.291 15:08:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.291 15:08:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3104070 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:18.549 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.807 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:18.807 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:18.807 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:18.807 nvmf hotplug test: fio failed as expected 00:12:18.807 15:08:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:19.372 15:08:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:19.372 rmmod nvme_tcp 00:12:19.372 rmmod nvme_fabrics 00:12:19.372 rmmod nvme_keyring 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3101638 ']' 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3101638 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3101638 ']' 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3101638 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.372 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3101638 00:12:19.630 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.630 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.630 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3101638' 00:12:19.630 killing process with pid 3101638 00:12:19.630 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3101638 00:12:19.630 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3101638 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@791 -- # iptables-restore 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.891 15:08:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.799 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:21.799 00:12:21.799 real 0m31.915s 00:12:21.799 user 1m56.849s 00:12:21.799 sys 0m8.482s 00:12:21.799 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.799 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.799 ************************************ 00:12:21.799 END TEST nvmf_fio_target 00:12:21.799 ************************************ 00:12:21.799 15:08:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:21.799 15:08:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.799 15:08:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.799 15:08:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:22.059 ************************************ 00:12:22.059 START TEST nvmf_bdevio 00:12:22.059 ************************************ 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:22.059 * Looking for test storage... 
00:12:22.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lcov --version 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:22.059 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:22.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.060 --rc genhtml_branch_coverage=1 00:12:22.060 --rc genhtml_function_coverage=1 00:12:22.060 --rc genhtml_legend=1 00:12:22.060 --rc geninfo_all_blocks=1 00:12:22.060 --rc geninfo_unexecuted_blocks=1 00:12:22.060 00:12:22.060 ' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:22.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.060 --rc genhtml_branch_coverage=1 00:12:22.060 --rc genhtml_function_coverage=1 00:12:22.060 --rc genhtml_legend=1 00:12:22.060 --rc geninfo_all_blocks=1 00:12:22.060 --rc geninfo_unexecuted_blocks=1 00:12:22.060 00:12:22.060 ' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:22.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.060 --rc genhtml_branch_coverage=1 00:12:22.060 --rc genhtml_function_coverage=1 00:12:22.060 --rc genhtml_legend=1 00:12:22.060 --rc geninfo_all_blocks=1 00:12:22.060 --rc geninfo_unexecuted_blocks=1 00:12:22.060 00:12:22.060 ' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:22.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.060 --rc genhtml_branch_coverage=1 00:12:22.060 --rc genhtml_function_coverage=1 00:12:22.060 --rc genhtml_legend=1 00:12:22.060 --rc geninfo_all_blocks=1 00:12:22.060 --rc geninfo_unexecuted_blocks=1 00:12:22.060 00:12:22.060 ' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.060 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.061 15:08:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:25.348 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:25.348 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.348 15:08:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:25.348 Found net devices under 0000:84:00.0: cvl_0_0 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.348 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:25.349 Found net devices under 0000:84:00.1: cvl_0_1 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.349 
15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:12:25.349 00:12:25.349 --- 10.0.0.2 ping statistics --- 00:12:25.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.349 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:12:25.349 00:12:25.349 --- 10.0.0.1 ping statistics --- 00:12:25.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.349 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3107214 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3107214 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3107214 ']' 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.349 15:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.349 [2024-10-28 15:08:11.976112] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
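Before the target application above was launched, nvmf_tcp_init built the test topology: the target-side port (cvl_0_0) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420 on the initiator-side port, and the two pings confirm connectivity in both directions. Stripped of the xtrace prefixes, the sequence is roughly (interface and namespace names as they appear in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # rule is tagged so teardown can strip exactly these entries via grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

With that path verified, nvmfappstart launches nvmf_tgt inside the namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x78 line above) and waits for it to come up on /var/tmp/spdk.sock before issuing any RPCs.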
00:12:25.349 [2024-10-28 15:08:11.976206] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.349 [2024-10-28 15:08:12.065427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.349 [2024-10-28 15:08:12.133873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.349 [2024-10-28 15:08:12.133948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.349 [2024-10-28 15:08:12.133966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.349 [2024-10-28 15:08:12.133980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.349 [2024-10-28 15:08:12.133992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.349 [2024-10-28 15:08:12.135860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:25.349 [2024-10-28 15:08:12.135918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:25.349 [2024-10-28 15:08:12.135970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:25.349 [2024-10-28 15:08:12.135973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.609 [2024-10-28 15:08:12.296809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.609 Malloc0 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.609 15:08:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.609 [2024-10-28 15:08:12.371937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:25.609 { 00:12:25.609 "params": { 00:12:25.609 "name": "Nvme$subsystem", 00:12:25.609 "trtype": "$TEST_TRANSPORT", 00:12:25.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:25.609 "adrfam": "ipv4", 00:12:25.609 "trsvcid": "$NVMF_PORT", 00:12:25.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:25.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:25.609 "hdgst": ${hdgst:-false}, 00:12:25.609 "ddgst": ${ddgst:-false} 00:12:25.609 }, 00:12:25.609 "method": "bdev_nvme_attach_controller" 00:12:25.609 } 00:12:25.609 EOF 00:12:25.609 )") 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:25.609 15:08:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:25.609 "params": { 00:12:25.609 "name": "Nvme1", 00:12:25.609 "trtype": "tcp", 00:12:25.609 "traddr": "10.0.0.2", 00:12:25.609 "adrfam": "ipv4", 00:12:25.609 "trsvcid": "4420", 00:12:25.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:25.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:25.609 "hdgst": false, 00:12:25.609 "ddgst": false 00:12:25.609 }, 00:12:25.609 "method": "bdev_nvme_attach_controller" 00:12:25.609 }' 00:12:25.609 [2024-10-28 15:08:12.422759] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
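At this point bdevio.sh has provisioned the target entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace backed by that bdev, and a TCP listener on 10.0.0.2:4420. The rpc_cmd lines in the trace are the test framework's shorthand for issuing those RPCs; expressed as plain scripts/rpc.py calls against the target's default /var/tmp/spdk.sock, the same setup is roughly (a sketch, assuming the SPDK repo root as the working directory):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary itself is not told about any of this directly: gen_nvmf_target_json prints the small JSON config shown above (a single bdev_nvme_attach_controller call pointing at 10.0.0.2:4420), and bdevio reads it through --json /dev/fd/62, so the initiator side attaches to the target as an ordinary NVMe-oF controller before running its block-device test suite.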
00:12:25.609 [2024-10-28 15:08:12.422842] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107358 ] 00:12:25.867 [2024-10-28 15:08:12.504182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.867 [2024-10-28 15:08:12.574621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.867 [2024-10-28 15:08:12.574682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.867 [2024-10-28 15:08:12.574687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.126 I/O targets: 00:12:26.126 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:26.126 00:12:26.126 00:12:26.126 CUnit - A unit testing framework for C - Version 2.1-3 00:12:26.126 http://cunit.sourceforge.net/ 00:12:26.126 00:12:26.126 00:12:26.126 Suite: bdevio tests on: Nvme1n1 00:12:26.126 Test: blockdev write read block ...passed 00:12:26.126 Test: blockdev write zeroes read block ...passed 00:12:26.126 Test: blockdev write zeroes read no split ...passed 00:12:26.126 Test: blockdev write zeroes read split ...passed 00:12:26.126 Test: blockdev write zeroes read split partial ...passed 00:12:26.126 Test: blockdev reset ...[2024-10-28 15:08:12.929522] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:26.126 [2024-10-28 15:08:12.929663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eae80 (9): Bad file descriptor 00:12:26.126 [2024-10-28 15:08:12.983793] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:26.126 passed 00:12:26.126 Test: blockdev write read 8 blocks ...passed 00:12:26.126 Test: blockdev write read size > 128k ...passed 00:12:26.126 Test: blockdev write read invalid size ...passed 00:12:26.384 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:26.384 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:26.384 Test: blockdev write read max offset ...passed 00:12:26.384 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:26.384 Test: blockdev writev readv 8 blocks ...passed 00:12:26.384 Test: blockdev writev readv 30 x 1block ...passed 00:12:26.384 Test: blockdev writev readv block ...passed 00:12:26.384 Test: blockdev writev readv size > 128k ...passed 00:12:26.384 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:26.384 Test: blockdev comparev and writev ...[2024-10-28 15:08:13.236574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:26.384 [2024-10-28 15:08:13.236646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.236678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:26.384 [2024-10-28 15:08:13.237068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.237098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:26.384 [2024-10-28 15:08:13.237132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.237152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:26.384 [2024-10-28 15:08:13.237545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.237579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:26.384 [2024-10-28 15:08:13.237604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.237623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:26.384 [2024-10-28 15:08:13.238026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.238056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:26.384 [2024-10-28 15:08:13.238082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:26.384 [2024-10-28 15:08:13.238101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:26.643 passed 00:12:26.643 Test: blockdev nvme passthru rw ...passed 00:12:26.643 Test: blockdev nvme passthru vendor specific ...[2024-10-28 15:08:13.319963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:26.643 [2024-10-28 15:08:13.319995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:26.643 [2024-10-28 15:08:13.320151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:26.643 [2024-10-28 15:08:13.320178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:26.643 [2024-10-28 15:08:13.320329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:26.643 [2024-10-28 15:08:13.320356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:26.643 [2024-10-28 15:08:13.320507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:26.643 [2024-10-28 15:08:13.320535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:26.643 passed 00:12:26.643 Test: blockdev nvme admin passthru ...passed 00:12:26.643 Test: blockdev copy ...passed 00:12:26.643 00:12:26.643 Run Summary: Type Total Ran Passed Failed Inactive 00:12:26.643 suites 1 1 n/a 0 0 00:12:26.643 tests 23 23 23 0 0 00:12:26.643 asserts 152 152 152 0 n/a 00:12:26.643 00:12:26.643 Elapsed time = 1.113 seconds 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.901 rmmod nvme_tcp 00:12:26.901 rmmod nvme_fabrics 00:12:26.901 rmmod nvme_keyring 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3107214 ']' 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3107214 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3107214 ']' 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3107214 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3107214 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3107214' 00:12:26.901 killing process with pid 3107214 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3107214 00:12:26.901 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3107214 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.159 15:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.697 00:12:29.697 real 0m7.343s 00:12:29.697 user 0m10.517s 00:12:29.697 sys 0m2.960s 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 ************************************ 00:12:29.697 END TEST nvmf_bdevio 00:12:29.697 ************************************ 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:29.697 00:12:29.697 real 4m47.561s 00:12:29.697 user 12m15.856s 00:12:29.697 sys 1m25.245s 
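Teardown mirrors the setup: nvmftestfini kills the nvmf_tgt process (pid 3107214 in this run), unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules, strips the tagged firewall rules, and removes the namespace. Condensed from the trace above (a sketch of what the helpers do, not the helpers themselves; the namespace deletion is an assumption about _remove_spdk_ns):

    kill "$nvmfpid"
    wait "$nvmfpid"                                       # killprocess / wait in the trace
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # iptr: drop every rule that was tagged with the SPDK_NVMF comment at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

That returns the machine to a clean state before the next test group (nvmf_target_extra) starts below.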
00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 ************************************ 00:12:29.697 END TEST nvmf_target_core 00:12:29.697 ************************************ 00:12:29.697 15:08:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:29.697 15:08:16 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.697 15:08:16 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.697 15:08:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 ************************************ 00:12:29.697 START TEST nvmf_target_extra 00:12:29.697 ************************************ 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:29.697 * Looking for test storage... 00:12:29.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lcov --version 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:29.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.697 --rc genhtml_branch_coverage=1 00:12:29.697 --rc genhtml_function_coverage=1 00:12:29.697 --rc genhtml_legend=1 00:12:29.697 --rc geninfo_all_blocks=1 00:12:29.697 --rc geninfo_unexecuted_blocks=1 00:12:29.697 00:12:29.697 ' 00:12:29.697 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:29.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.697 --rc genhtml_branch_coverage=1 00:12:29.697 --rc genhtml_function_coverage=1 00:12:29.697 --rc genhtml_legend=1 00:12:29.697 --rc geninfo_all_blocks=1 00:12:29.698 --rc geninfo_unexecuted_blocks=1 00:12:29.698 00:12:29.698 ' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:29.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.698 --rc genhtml_branch_coverage=1 00:12:29.698 --rc genhtml_function_coverage=1 00:12:29.698 --rc genhtml_legend=1 00:12:29.698 --rc geninfo_all_blocks=1 00:12:29.698 --rc geninfo_unexecuted_blocks=1 00:12:29.698 00:12:29.698 ' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:29.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.698 --rc genhtml_branch_coverage=1 00:12:29.698 --rc genhtml_function_coverage=1 00:12:29.698 --rc genhtml_legend=1 00:12:29.698 --rc geninfo_all_blocks=1 00:12:29.698 --rc geninfo_unexecuted_blocks=1 00:12:29.698 00:12:29.698 ' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
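The block of scripts/common.sh calls above is just a guard on the installed lcov: lt 1.15 2 splits both version strings on dots/dashes and compares them component by component, and because 1.15 sorts below 2 the harness keeps the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option spelling captured in LCOV_OPTS above. A minimal stand-alone version of that comparison (illustrative only, not the script's exact code):

    cmp_lt() {                                   # returns 0 when $1 is an older version than $2
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                                 # equal versions are not "less than"
    }
    cmp_lt 1.15 2 && echo "old lcov detected"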
00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.698 ************************************ 00:12:29.698 START TEST nvmf_example 00:12:29.698 ************************************ 00:12:29.698 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:29.958 * Looking for test storage... 
00:12:29.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lcov --version 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:29.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.958 --rc genhtml_branch_coverage=1 00:12:29.958 --rc genhtml_function_coverage=1 00:12:29.958 --rc genhtml_legend=1 00:12:29.958 --rc geninfo_all_blocks=1 00:12:29.958 --rc geninfo_unexecuted_blocks=1 00:12:29.958 00:12:29.958 ' 00:12:29.958 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.959 --rc genhtml_branch_coverage=1 00:12:29.959 --rc genhtml_function_coverage=1 00:12:29.959 --rc genhtml_legend=1 00:12:29.959 --rc geninfo_all_blocks=1 00:12:29.959 --rc geninfo_unexecuted_blocks=1 00:12:29.959 00:12:29.959 ' 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.959 --rc genhtml_branch_coverage=1 00:12:29.959 --rc genhtml_function_coverage=1 00:12:29.959 --rc genhtml_legend=1 00:12:29.959 --rc geninfo_all_blocks=1 00:12:29.959 --rc geninfo_unexecuted_blocks=1 00:12:29.959 00:12:29.959 ' 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.959 --rc genhtml_branch_coverage=1 00:12:29.959 --rc genhtml_function_coverage=1 00:12:29.959 --rc genhtml_legend=1 00:12:29.959 --rc geninfo_all_blocks=1 00:12:29.959 --rc geninfo_unexecuted_blocks=1 00:12:29.959 00:12:29.959 ' 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:29.959 15:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.959 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:30.219 15:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.219 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:33.510 15:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:33.510 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:33.510 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.510 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:33.511 Found net devices under 0000:84:00.0: cvl_0_0 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:33.511 Found net devices under 0000:84:00.1: cvl_0_1 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.511 15:08:19 
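[Editor's note] The gather_supported_nvmf_pci_devs trace above matches the Intel E810 ports (vendor 0x8086, device 0x159b) and then resolves each PCI function to its kernel net device through sysfs, printing the "Found net devices under ..." lines. A minimal standalone sketch of that lookup, for readers reproducing it outside the harness (the vendor/device IDs come straight from the trace; the loop itself is illustrative and is not the common.sh implementation):

# List net devices backed by Intel E810 (0x8086:0x159b) PCI functions via sysfs.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")     # e.g. 0x8086 (Intel)
    device=$(<"$pci/device")     # e.g. 0x159b (E810, as matched above)
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"   # e.g. 0000:84:00.0 -> cvl_0_0
        done
    fi
done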
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:33.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:12:33.511 00:12:33.511 --- 10.0.0.2 ping statistics --- 00:12:33.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.511 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:33.511 00:12:33.511 --- 10.0.0.1 ping statistics --- 00:12:33.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.511 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3109646 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3109646 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3109646 ']' 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.511 15:08:19 
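[Editor's note] nvmf_tcp_init above splits the two E810 ports into a target/initiator pair on one host: the target port (cvl_0_0) is moved into a dedicated network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, an iptables rule admits TCP port 4420, and a ping in each direction confirms connectivity before the nvmf example app is started inside the namespace. Condensed from the traced commands (same interface names and addresses; treat it as a sketch, not the canonical common.sh code):

ip netns add cvl_0_0_ns_spdk                                   # namespace hosting the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

Keeping the two ends in separate namespaces over a back-to-back physical link is what lets a single machine act as both NVMe/TCP target and initiator in this test.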
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.511 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.511 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:33.769 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:45.967 Initializing NVMe Controllers 00:12:45.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:45.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:45.967 Initialization complete. Launching workers. 00:12:45.967 ======================================================== 00:12:45.967 Latency(us) 00:12:45.967 Device Information : IOPS MiB/s Average min max 00:12:45.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14675.44 57.33 4360.37 639.23 16344.58 00:12:45.967 ======================================================== 00:12:45.967 Total : 14675.44 57.33 4360.37 639.23 16344.58 00:12:45.967 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.967 rmmod nvme_tcp 00:12:45.967 rmmod nvme_fabrics 00:12:45.967 rmmod nvme_keyring 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3109646 ']' 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3109646 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3109646 ']' 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3109646 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3109646 00:12:45.967 15:08:30 
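[Editor's note] The example target above is assembled with five JSON-RPC calls before spdk_nvme_perf connects to it: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. The same sequence issued directly with scripts/rpc.py is sketched below (an assumption that rpc_cmd resolves to rpc.py against the default /var/tmp/spdk.sock socket, which is the socket the trace's waitforlisten message names; the arguments themselves are copied from the trace):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                     # 64 MiB, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The perf summary above is also self-consistent: 14675.44 IOPS at the 4096-byte I/O size requested with -o 4096 works out to roughly 57.3 MiB/s, matching the reported 57.33 MiB/s.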
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3109646' 00:12:45.967 killing process with pid 3109646 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3109646 00:12:45.967 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3109646 00:12:45.967 nvmf threads initialize successfully 00:12:45.967 bdev subsystem init successfully 00:12:45.967 created a nvmf target service 00:12:45.967 create targets's poll groups done 00:12:45.967 all subsystems of target started 00:12:45.967 nvmf target is running 00:12:45.967 all subsystems of target stopped 00:12:45.967 destroy targets's poll groups done 00:12:45.967 destroyed the nvmf target service 00:12:45.967 bdev subsystem finish successfully 00:12:45.967 nvmf threads destroy successfully 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.967 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:46.535 00:12:46.535 real 0m16.670s 00:12:46.535 user 0m42.845s 00:12:46.535 sys 0m4.441s 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:46.535 ************************************ 00:12:46.535 END TEST nvmf_example 00:12:46.535 ************************************ 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.535 ************************************ 00:12:46.535 START TEST nvmf_filesystem 00:12:46.535 ************************************ 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:46.535 * Looking for test storage... 00:12:46.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:12:46.535 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:46.800 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:46.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.801 --rc genhtml_branch_coverage=1 00:12:46.801 --rc genhtml_function_coverage=1 00:12:46.801 --rc genhtml_legend=1 00:12:46.801 --rc geninfo_all_blocks=1 00:12:46.801 --rc geninfo_unexecuted_blocks=1 00:12:46.801 00:12:46.801 ' 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:46.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.801 --rc genhtml_branch_coverage=1 00:12:46.801 --rc genhtml_function_coverage=1 00:12:46.801 --rc genhtml_legend=1 00:12:46.801 --rc geninfo_all_blocks=1 00:12:46.801 --rc geninfo_unexecuted_blocks=1 00:12:46.801 00:12:46.801 ' 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:46.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.801 --rc genhtml_branch_coverage=1 00:12:46.801 --rc genhtml_function_coverage=1 00:12:46.801 --rc genhtml_legend=1 00:12:46.801 --rc geninfo_all_blocks=1 00:12:46.801 --rc geninfo_unexecuted_blocks=1 00:12:46.801 00:12:46.801 ' 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:46.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.801 --rc genhtml_branch_coverage=1 00:12:46.801 --rc genhtml_function_coverage=1 00:12:46.801 --rc genhtml_legend=1 00:12:46.801 --rc geninfo_all_blocks=1 00:12:46.801 --rc geninfo_unexecuted_blocks=1 00:12:46.801 00:12:46.801 ' 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:46.801 15:08:33 
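[Editor's note] The nvmf_filesystem preamble above probes the installed lcov version (lcov --version | awk '{print $NF}', apparently 1.15 here), compares it against 2 with the lt/cmp_versions helpers from scripts/common.sh, and only then exports the older "--rc lcov_branch_coverage=1" option spelling. A standalone sketch of that dot-separated numeric version comparison (illustrative only; the canonical helper is cmp_versions in scripts/common.sh):

version_lt() {   # return 0 if $1 < $2, comparing dot-separated numeric fields
    local -a a b
    local i
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field already decides it
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                        # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_*_coverage=1 option spelling"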
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:46.801 
15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:46.801 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:46.802 #define SPDK_CONFIG_H 00:12:46.802 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:46.802 #define SPDK_CONFIG_APPS 1 00:12:46.802 #define SPDK_CONFIG_ARCH native 00:12:46.802 #undef SPDK_CONFIG_ASAN 00:12:46.802 #undef SPDK_CONFIG_AVAHI 00:12:46.802 #undef SPDK_CONFIG_CET 00:12:46.802 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:46.802 #define SPDK_CONFIG_COVERAGE 1 00:12:46.802 #define SPDK_CONFIG_CROSS_PREFIX 00:12:46.802 #undef SPDK_CONFIG_CRYPTO 00:12:46.802 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:46.802 #undef SPDK_CONFIG_CUSTOMOCF 00:12:46.802 #undef SPDK_CONFIG_DAOS 00:12:46.802 #define SPDK_CONFIG_DAOS_DIR 00:12:46.802 #define SPDK_CONFIG_DEBUG 1 00:12:46.802 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:46.802 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:46.802 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:46.802 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:46.802 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:46.802 #undef SPDK_CONFIG_DPDK_UADK 00:12:46.802 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:46.802 #define SPDK_CONFIG_EXAMPLES 1 00:12:46.802 #undef SPDK_CONFIG_FC 00:12:46.802 #define SPDK_CONFIG_FC_PATH 00:12:46.802 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:46.802 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:46.802 #define SPDK_CONFIG_FSDEV 1 00:12:46.802 #undef SPDK_CONFIG_FUSE 00:12:46.802 #undef SPDK_CONFIG_FUZZER 00:12:46.802 #define SPDK_CONFIG_FUZZER_LIB 00:12:46.802 #undef SPDK_CONFIG_GOLANG 00:12:46.802 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:46.802 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:46.802 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:46.802 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:46.802 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:46.802 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:46.802 #undef SPDK_CONFIG_HAVE_LZ4 00:12:46.802 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:46.802 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:46.802 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:46.802 #define SPDK_CONFIG_IDXD 1 00:12:46.802 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:46.802 #undef SPDK_CONFIG_IPSEC_MB 00:12:46.802 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:46.802 #define SPDK_CONFIG_ISAL 1 00:12:46.802 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:46.802 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:46.802 #define SPDK_CONFIG_LIBDIR 00:12:46.802 #undef SPDK_CONFIG_LTO 00:12:46.802 #define SPDK_CONFIG_MAX_LCORES 128 00:12:46.802 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:46.802 #define SPDK_CONFIG_NVME_CUSE 1 00:12:46.802 #undef SPDK_CONFIG_OCF 00:12:46.802 #define SPDK_CONFIG_OCF_PATH 00:12:46.802 #define SPDK_CONFIG_OPENSSL_PATH 00:12:46.802 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:46.802 #define SPDK_CONFIG_PGO_DIR 00:12:46.802 #undef SPDK_CONFIG_PGO_USE 00:12:46.802 #define SPDK_CONFIG_PREFIX /usr/local 00:12:46.802 #undef SPDK_CONFIG_RAID5F 00:12:46.802 #undef SPDK_CONFIG_RBD 00:12:46.802 #define SPDK_CONFIG_RDMA 1 00:12:46.802 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:46.802 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:46.802 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:46.802 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:46.802 #define SPDK_CONFIG_SHARED 1 00:12:46.802 #undef SPDK_CONFIG_SMA 00:12:46.802 #define SPDK_CONFIG_TESTS 1 00:12:46.802 #undef SPDK_CONFIG_TSAN 
00:12:46.802 #define SPDK_CONFIG_UBLK 1 00:12:46.802 #define SPDK_CONFIG_UBSAN 1 00:12:46.802 #undef SPDK_CONFIG_UNIT_TESTS 00:12:46.802 #undef SPDK_CONFIG_URING 00:12:46.802 #define SPDK_CONFIG_URING_PATH 00:12:46.802 #undef SPDK_CONFIG_URING_ZNS 00:12:46.802 #undef SPDK_CONFIG_USDT 00:12:46.802 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:46.802 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:46.802 #define SPDK_CONFIG_VFIO_USER 1 00:12:46.802 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:46.802 #define SPDK_CONFIG_VHOST 1 00:12:46.802 #define SPDK_CONFIG_VIRTIO 1 00:12:46.802 #undef SPDK_CONFIG_VTUNE 00:12:46.802 #define SPDK_CONFIG_VTUNE_DIR 00:12:46.802 #define SPDK_CONFIG_WERROR 1 00:12:46.802 #define SPDK_CONFIG_WPDK_DIR 00:12:46.802 #undef SPDK_CONFIG_XNVME 00:12:46.802 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:46.802 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:46.803 15:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:46.803 15:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:46.803 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
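The LD_LIBRARY_PATH and PYTHONPATH values exported above repeat the same few directories over and over, apparently because each nested source of the common scripts prepends them again; the duplicates are harmless, since both the dynamic loader and Python stop at the first match. Stripped of the repetition, the environment being set up amounts to:

  # deduplicated view of the exports above (illustrative; same effective search paths)
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand used only in this sketch
  export LD_LIBRARY_PATH="$SPDK_ROOT/build/lib:$SPDK_ROOT/dpdk/build/lib:$SPDK_ROOT/build/libvfio-user/usr/local/lib"
  export PYTHONPATH="$SPDK_ROOT/python:$SPDK_ROOT/test/rpc_plugins"
  export PYTHONDONTWRITEBYTECODE=1   # keep the workspace free of .pyc files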
00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
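Just above, the harness rebuilds the LeakSanitizer suppression file: the old /var/tmp/asan_suppression_file is removed, a leak:libfuse3.so rule is emitted, and LSAN_OPTIONS is pointed at the file so known libfuse3 leak reports do not fail sanitized runs. How the echo output reaches the file is not visible in the trace, so the redirection in this sketch is an assumption:

  rm -rf /var/tmp/asan_suppression_file
  echo "leak:libfuse3.so" >> /var/tmp/asan_suppression_file   # redirection assumed
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file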
00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:12:46.804 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3111330 ]] 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3111330 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 
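The set_test_storage call that closes the entry above is expanded in the trace that follows: the harness builds a list of candidate directories (the test's own directory, then a mktemp fallback), checks how much space backs each one with df, and exports the first candidate with at least the requested ~2 GiB plus slack as SPDK_TEST_STORAGE. An illustrative standalone equivalent, not the script's exact parsing (which reads the whole df -T table into arrays), would be:

  # condensed sketch of the storage selection recorded in the following trace
  requested_size=2214592512   # 2 GiB plus slack, as printed further down
  testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      avail=$(df -B1 --output=avail "$target_dir" 2>/dev/null | tail -1)   # bytes free on the backing fs
      [[ -n $avail ]] && (( avail >= requested_size )) && break            # otherwise fall through to the last candidate
  done
  export SPDK_TEST_STORAGE=$target_dir
  mkdir -p "$SPDK_TEST_STORAGE"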
00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.ySYATu 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ySYATu/tests/target /tmp/spdk.ySYATu 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=660762624 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:46.805 15:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4623667200 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=38168645632 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=45077078016 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6908432384 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22528507904 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538539008 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=8992956416 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9015418880 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22462464 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22537371648 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538539008 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1167360 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.805 15:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4507693056 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4507705344 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:46.805 * Looking for test storage... 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.805 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=38168645632 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9123024896 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:12:46.806 15:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:12:46.806 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.066 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.067 --rc genhtml_branch_coverage=1 00:12:47.067 --rc genhtml_function_coverage=1 00:12:47.067 --rc genhtml_legend=1 00:12:47.067 --rc geninfo_all_blocks=1 00:12:47.067 --rc geninfo_unexecuted_blocks=1 00:12:47.067 00:12:47.067 ' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.067 --rc genhtml_branch_coverage=1 00:12:47.067 --rc genhtml_function_coverage=1 00:12:47.067 --rc genhtml_legend=1 00:12:47.067 --rc geninfo_all_blocks=1 00:12:47.067 --rc geninfo_unexecuted_blocks=1 00:12:47.067 00:12:47.067 ' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.067 --rc genhtml_branch_coverage=1 00:12:47.067 --rc genhtml_function_coverage=1 00:12:47.067 --rc genhtml_legend=1 00:12:47.067 --rc geninfo_all_blocks=1 00:12:47.067 --rc geninfo_unexecuted_blocks=1 00:12:47.067 00:12:47.067 ' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.067 --rc genhtml_branch_coverage=1 00:12:47.067 --rc genhtml_function_coverage=1 00:12:47.067 --rc genhtml_legend=1 00:12:47.067 --rc geninfo_all_blocks=1 00:12:47.067 --rc geninfo_unexecuted_blocks=1 00:12:47.067 00:12:47.067 ' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:47.067 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.068 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:50.383 
15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:50.383 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:50.383 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:50.383 Found net devices under 0000:84:00.0: cvl_0_0 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.383 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:50.384 Found net devices under 
0000:84:00.1: cvl_0_1 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:12:50.384 00:12:50.384 --- 10.0.0.2 ping statistics --- 00:12:50.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.384 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:12:50.384 00:12:50.384 --- 10.0.0.1 ping statistics --- 00:12:50.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.384 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:50.384 ************************************ 00:12:50.384 START TEST nvmf_filesystem_no_in_capsule 00:12:50.384 ************************************ 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
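The trace above carries out nvmf/common.sh's nvmf_tcp_init step: one E810 port (cvl_0_0) is moved into a private network namespace to act as the SPDK target, the peer port (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened in iptables, and reachability is confirmed with a ping in each direction. A minimal sketch of that sequence, using the interface names and 10.0.0.0/24 addresses this particular run happened to use (other hosts will differ):

  TGT_IF=cvl_0_0            # target side, moved into the namespace
  INI_IF=cvl_0_1            # initiator side, left in the root namespace
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> root namespace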
00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3113116 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3113116 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3113116 ']' 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.384 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.384 [2024-10-28 15:08:37.028196] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:12:50.384 [2024-10-28 15:08:37.028298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.384 [2024-10-28 15:08:37.175554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.643 [2024-10-28 15:08:37.303448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.643 [2024-10-28 15:08:37.303549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.643 [2024-10-28 15:08:37.303585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.643 [2024-10-28 15:08:37.303615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.643 [2024-10-28 15:08:37.303642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
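Here the target application is started inside that namespace and the test waits for its RPC socket before issuing any configuration. A rough equivalent of the launch, with the binary path written as a placeholder rather than the full workspace path from this job, and a simplified stand-in for the waitforlisten helper:

  # Placeholder path; this run uses the nvmf_tgt binary built in the Jenkins workspace.
  SPDK_BIN=/path/to/spdk/build/bin
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                   # 3113116 in this run
  # Simplified wait: the real helper polls the RPC socket at /var/tmp/spdk.sock.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
  echo "nvmf_tgt ready, pid $nvmfpid"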
00:12:50.643 [2024-10-28 15:08:37.307065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.643 [2024-10-28 15:08:37.307168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.643 [2024-10-28 15:08:37.307257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.643 [2024-10-28 15:08:37.307261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.643 [2024-10-28 15:08:37.500442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.643 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.901 Malloc1 00:12:50.901 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.901 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:50.901 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.901 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.901 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.902 15:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.902 [2024-10-28 15:08:37.702235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:50.902 { 00:12:50.902 "name": "Malloc1", 00:12:50.902 "aliases": [ 00:12:50.902 "30a41eed-9e63-41c8-b25d-a8ffa9ca32f6" 00:12:50.902 ], 00:12:50.902 "product_name": "Malloc disk", 00:12:50.902 "block_size": 512, 00:12:50.902 "num_blocks": 1048576, 00:12:50.902 "uuid": "30a41eed-9e63-41c8-b25d-a8ffa9ca32f6", 00:12:50.902 "assigned_rate_limits": { 00:12:50.902 "rw_ios_per_sec": 0, 00:12:50.902 "rw_mbytes_per_sec": 0, 00:12:50.902 "r_mbytes_per_sec": 0, 00:12:50.902 "w_mbytes_per_sec": 0 00:12:50.902 }, 00:12:50.902 "claimed": true, 00:12:50.902 "claim_type": "exclusive_write", 00:12:50.902 "zoned": false, 00:12:50.902 "supported_io_types": { 00:12:50.902 "read": 
true, 00:12:50.902 "write": true, 00:12:50.902 "unmap": true, 00:12:50.902 "flush": true, 00:12:50.902 "reset": true, 00:12:50.902 "nvme_admin": false, 00:12:50.902 "nvme_io": false, 00:12:50.902 "nvme_io_md": false, 00:12:50.902 "write_zeroes": true, 00:12:50.902 "zcopy": true, 00:12:50.902 "get_zone_info": false, 00:12:50.902 "zone_management": false, 00:12:50.902 "zone_append": false, 00:12:50.902 "compare": false, 00:12:50.902 "compare_and_write": false, 00:12:50.902 "abort": true, 00:12:50.902 "seek_hole": false, 00:12:50.902 "seek_data": false, 00:12:50.902 "copy": true, 00:12:50.902 "nvme_iov_md": false 00:12:50.902 }, 00:12:50.902 "memory_domains": [ 00:12:50.902 { 00:12:50.902 "dma_device_id": "system", 00:12:50.902 "dma_device_type": 1 00:12:50.902 }, 00:12:50.902 { 00:12:50.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.902 "dma_device_type": 2 00:12:50.902 } 00:12:50.902 ], 00:12:50.902 "driver_specific": {} 00:12:50.902 } 00:12:50.902 ]' 00:12:50.902 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:51.160 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:51.160 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:51.160 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:51.160 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:51.160 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:51.160 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:51.160 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.730 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.730 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.730 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.730 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.730 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:54.255 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:54.255 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:54.820 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.752 ************************************ 00:12:55.752 START TEST filesystem_ext4 00:12:55.752 ************************************ 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
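The entries above configure the target over JSON-RPC (a TCP transport with in-capsule data disabled, a 512 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exporting it, and a listener on 10.0.0.2:4420), connect from the initiator side with nvme-cli, locate the resulting block device by its serial, and carve one GPT partition for the filesystem tests. Condensed as a sketch, with the rpc.py path assumed and the flag values copied from the log:

  RPC=/path/to/spdk/scripts/rpc.py             # talks to the target over /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0           # the in-capsule variant uses -c 4096
  $RPC bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side; this run additionally passes --hostnqn/--hostid derived from the machine UUID.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  sleep 2
  dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 here
  mkdir -p /mnt/device
  parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe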
00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:55.752 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:55.752 mke2fs 1.47.0 (5-Feb-2023) 00:12:55.753 Discarding device blocks: 0/522240 done 00:12:55.753 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:55.753 Filesystem UUID: 18060862-4279-44cb-aa27-abe582482b14 00:12:55.753 Superblock backups stored on blocks: 00:12:55.753 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:55.753 00:12:55.753 Allocating group tables: 0/64 done 00:12:55.753 Writing inode tables: 0/64 done 00:12:56.010 Creating journal (8192 blocks): done 00:12:58.312 Writing superblocks and filesystem accounting information: 0/64 done 00:12:58.312 00:12:58.312 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:58.312 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:03.564 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:03.823 
15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3113116 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:03.823 00:13:03.823 real 0m8.046s 00:13:03.823 user 0m0.022s 00:13:03.823 sys 0m0.062s 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:03.823 ************************************ 00:13:03.823 END TEST filesystem_ext4 00:13:03.823 ************************************ 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.823 ************************************ 00:13:03.823 START TEST filesystem_btrfs 00:13:03.823 ************************************ 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:03.823 15:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:03.823 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:04.081 btrfs-progs v6.8.1 00:13:04.081 See https://btrfs.readthedocs.io for more information. 00:13:04.081 00:13:04.081 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:04.081 NOTE: several default settings have changed in version 5.15, please make sure 00:13:04.081 this does not affect your deployments: 00:13:04.081 - DUP for metadata (-m dup) 00:13:04.081 - enabled no-holes (-O no-holes) 00:13:04.081 - enabled free-space-tree (-R free-space-tree) 00:13:04.081 00:13:04.081 Label: (null) 00:13:04.081 UUID: b46e9f06-2b00-4083-b356-1ef027bf8418 00:13:04.081 Node size: 16384 00:13:04.081 Sector size: 4096 (CPU page size: 4096) 00:13:04.081 Filesystem size: 510.00MiB 00:13:04.081 Block group profiles: 00:13:04.081 Data: single 8.00MiB 00:13:04.081 Metadata: DUP 32.00MiB 00:13:04.081 System: DUP 8.00MiB 00:13:04.081 SSD detected: yes 00:13:04.081 Zoned device: no 00:13:04.081 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:04.081 Checksum: crc32c 00:13:04.081 Number of devices: 1 00:13:04.081 Devices: 00:13:04.081 ID SIZE PATH 00:13:04.081 1 510.00MiB /dev/nvme0n1p1 00:13:04.081 00:13:04.081 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:04.081 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:05.056 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:05.056 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:05.056 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:05.056 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3113116 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:05.057 
15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:05.057 00:13:05.057 real 0m1.253s 00:13:05.057 user 0m0.024s 00:13:05.057 sys 0m0.112s 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:05.057 ************************************ 00:13:05.057 END TEST filesystem_btrfs 00:13:05.057 ************************************ 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.057 ************************************ 00:13:05.057 START TEST filesystem_xfs 00:13:05.057 ************************************ 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:05.057 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:05.345 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:05.345 = sectsz=512 attr=2, projid32bit=1 00:13:05.345 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:05.345 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:05.345 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:05.345 = sunit=0 swidth=0 blks 00:13:05.345 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:05.345 log =internal log bsize=4096 blocks=16384, version=2 00:13:05.345 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:05.345 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:06.278 Discarding blocks...Done. 00:13:06.278 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:06.278 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3113116 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.175 00:13:08.175 real 0m2.995s 00:13:08.175 user 0m0.015s 00:13:08.175 sys 0m0.064s 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:08.175 ************************************ 00:13:08.175 END TEST filesystem_xfs 00:13:08.175 ************************************ 00:13:08.175 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.433 15:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:08.433 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3113116 00:13:08.691 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3113116 ']' 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3113116 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3113116 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3113116' 00:13:08.692 killing process with pid 3113116 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3113116 00:13:08.692 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 3113116 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:09.260 00:13:09.260 real 0m18.936s 00:13:09.260 user 1m12.763s 00:13:09.260 sys 0m2.506s 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.260 ************************************ 00:13:09.260 END TEST nvmf_filesystem_no_in_capsule 00:13:09.260 ************************************ 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.260 ************************************ 00:13:09.260 START TEST nvmf_filesystem_in_capsule 00:13:09.260 ************************************ 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3115493 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3115493 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3115493 ']' 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
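Before the in-capsule variant above brings up its own target (pid 3115493, later created with -c 4096 in-capsule data), the preceding entries tear down the first instance: the test partition is removed, the host disconnects from cnode1, the subsystem is deleted over RPC, and the nvmf_tgt process is killed. As a sketch, using the names from this run (rpc.py path assumed):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  /path/to/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                 # pid 3113116 for the no-in-capsule target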
00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.260 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.260 [2024-10-28 15:08:56.092604] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:13:09.260 [2024-10-28 15:08:56.092822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.520 [2024-10-28 15:08:56.274328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.780 [2024-10-28 15:08:56.401856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.780 [2024-10-28 15:08:56.401960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.780 [2024-10-28 15:08:56.401997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.780 [2024-10-28 15:08:56.402028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.780 [2024-10-28 15:08:56.402054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.780 [2024-10-28 15:08:56.405642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.780 [2024-10-28 15:08:56.405760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.780 [2024-10-28 15:08:56.405866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.780 [2024-10-28 15:08:56.405869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.780 [2024-10-28 15:08:56.573468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.780 15:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.780 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.038 Malloc1 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.038 [2024-10-28 15:08:56.770996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:10.038 15:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.038 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.039 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.039 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:10.039 { 00:13:10.039 "name": "Malloc1", 00:13:10.039 "aliases": [ 00:13:10.039 "8009fccc-35dd-438e-8981-e22f6285147f" 00:13:10.039 ], 00:13:10.039 "product_name": "Malloc disk", 00:13:10.039 "block_size": 512, 00:13:10.039 "num_blocks": 1048576, 00:13:10.039 "uuid": "8009fccc-35dd-438e-8981-e22f6285147f", 00:13:10.039 "assigned_rate_limits": { 00:13:10.039 "rw_ios_per_sec": 0, 00:13:10.039 "rw_mbytes_per_sec": 0, 00:13:10.039 "r_mbytes_per_sec": 0, 00:13:10.039 "w_mbytes_per_sec": 0 00:13:10.039 }, 00:13:10.039 "claimed": true, 00:13:10.039 "claim_type": "exclusive_write", 00:13:10.039 "zoned": false, 00:13:10.039 "supported_io_types": { 00:13:10.039 "read": true, 00:13:10.039 "write": true, 00:13:10.039 "unmap": true, 00:13:10.039 "flush": true, 00:13:10.039 "reset": true, 00:13:10.039 "nvme_admin": false, 00:13:10.039 "nvme_io": false, 00:13:10.039 "nvme_io_md": false, 00:13:10.039 "write_zeroes": true, 00:13:10.039 "zcopy": true, 00:13:10.039 "get_zone_info": false, 00:13:10.039 "zone_management": false, 00:13:10.039 "zone_append": false, 00:13:10.039 "compare": false, 00:13:10.039 "compare_and_write": false, 00:13:10.039 "abort": true, 00:13:10.039 "seek_hole": false, 00:13:10.039 "seek_data": false, 00:13:10.039 "copy": true, 00:13:10.039 "nvme_iov_md": false 00:13:10.039 }, 00:13:10.039 "memory_domains": [ 00:13:10.039 { 00:13:10.039 "dma_device_id": "system", 00:13:10.039 "dma_device_type": 1 00:13:10.039 }, 00:13:10.039 { 00:13:10.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.039 "dma_device_type": 2 00:13:10.039 } 00:13:10.039 ], 00:13:10.039 "driver_specific": {} 00:13:10.039 } 00:13:10.039 ]' 00:13:10.039 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:10.039 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:10.039 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:10.297 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:10.297 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:10.297 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:10.297 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:10.297 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.862 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.862 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.862 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.862 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.862 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.759 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.759 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.759 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:13.016 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:13.016 15:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:13.947 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:14.879 ************************************ 00:13:14.879 START TEST filesystem_in_capsule_ext4 00:13:14.879 ************************************ 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:14.879 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:14.879 mke2fs 1.47.0 (5-Feb-2023) 00:13:14.879 Discarding device blocks: 0/522240 done 00:13:14.879 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:14.879 Filesystem UUID: 78118f82-8a15-4a5b-81e8-a7eaafd31e9f 00:13:14.879 Superblock backups stored on blocks: 00:13:14.879 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:14.879 00:13:14.879 Allocating group tables: 0/64 done 00:13:14.879 Writing inode tables: 
0/64 done 00:13:15.137 Creating journal (8192 blocks): done 00:13:17.024 Writing superblocks and filesystem accounting information: 0/64 done 00:13:17.024 00:13:17.024 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:17.024 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:22.283 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:22.283 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:22.283 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:22.283 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:22.283 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:22.283 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:22.540 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3115493 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:22.541 00:13:22.541 real 0m7.623s 00:13:22.541 user 0m0.029s 00:13:22.541 sys 0m0.056s 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:22.541 ************************************ 00:13:22.541 END TEST filesystem_in_capsule_ext4 00:13:22.541 ************************************ 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.541 
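The ext4 leg that just finished follows the same mount smoke test the btrfs and xfs legs repeat below: format the namespace's first partition, mount it, create and delete a file with syncs in between, unmount, and confirm the target process is still alive. A condensed sketch of that sequence; the device path and the PID argument are assumptions for illustration.

    #!/usr/bin/env bash
    set -e
    dev=/dev/nvme0n1p1                       # partition created by parted above (assumed)
    mnt=/mnt/device
    tgt_pid=${1:?usage: $0 <nvmf_tgt pid>}   # PID of the running target (assumption)

    mkfs.ext4 -F "$dev"
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/aaa"            # exercise a simple write through the fabric
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"

    # kill -0 sends no signal; it only checks the target survived the I/O.
    kill -0 "$tgt_pid"
    lsblk -l -o NAME | grep -qw nvme0n1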
************************************ 00:13:22.541 START TEST filesystem_in_capsule_btrfs 00:13:22.541 ************************************ 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:22.541 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:22.799 btrfs-progs v6.8.1 00:13:22.799 See https://btrfs.readthedocs.io for more information. 00:13:22.799 00:13:22.799 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:22.799 NOTE: several default settings have changed in version 5.15, please make sure 00:13:22.799 this does not affect your deployments: 00:13:22.799 - DUP for metadata (-m dup) 00:13:22.799 - enabled no-holes (-O no-holes) 00:13:22.799 - enabled free-space-tree (-R free-space-tree) 00:13:22.799 00:13:22.799 Label: (null) 00:13:22.799 UUID: ea264027-44f5-4e68-8f31-c4f17748043f 00:13:22.799 Node size: 16384 00:13:22.799 Sector size: 4096 (CPU page size: 4096) 00:13:22.799 Filesystem size: 510.00MiB 00:13:22.799 Block group profiles: 00:13:22.799 Data: single 8.00MiB 00:13:22.799 Metadata: DUP 32.00MiB 00:13:22.799 System: DUP 8.00MiB 00:13:22.799 SSD detected: yes 00:13:22.799 Zoned device: no 00:13:22.799 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:22.799 Checksum: crc32c 00:13:22.799 Number of devices: 1 00:13:22.799 Devices: 00:13:22.799 ID SIZE PATH 00:13:22.799 1 510.00MiB /dev/nvme0n1p1 00:13:22.799 00:13:22.799 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:22.799 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3115493 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:23.057 00:13:23.057 real 0m0.658s 00:13:23.057 user 0m0.030s 00:13:23.057 sys 0m0.092s 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.057 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:13:23.057 ************************************ 00:13:23.057 END TEST filesystem_in_capsule_btrfs 00:13:23.057 ************************************ 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.314 ************************************ 00:13:23.314 START TEST filesystem_in_capsule_xfs 00:13:23.314 ************************************ 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:23.314 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:23.314 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:23.314 = sectsz=512 attr=2, projid32bit=1 00:13:23.314 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:23.314 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:23.314 data = bsize=4096 blocks=130560, imaxpct=25 00:13:23.314 = sunit=0 swidth=0 blks 00:13:23.314 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:23.314 log =internal log bsize=4096 blocks=16384, version=2 00:13:23.314 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:23.314 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:24.245 Discarding blocks...Done. 
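Across the three legs the xtrace shows the same helper picking the right "force" switch for each mkfs tool before formatting: mke2fs takes -F, while mkfs.btrfs and mkfs.xfs take -f. A rough sketch of that selection, written from the trace rather than copied from the test library:

    # Sketch of the force-flag selection seen in the xtrace output above.
    make_fs() {
        local fstype=$1 dev=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F            # mke2fs wants -F to overwrite an existing fs
        else
            force=-f            # mkfs.btrfs and mkfs.xfs use -f
        fi
        "mkfs.$fstype" "$force" "$dev"
    }

    # Example: make_fs xfs /dev/nvme0n1p1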
00:13:24.245 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:24.245 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3115493 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:26.769 00:13:26.769 real 0m3.349s 00:13:26.769 user 0m0.017s 00:13:26.769 sys 0m0.071s 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:26.769 ************************************ 00:13:26.769 END TEST filesystem_in_capsule_xfs 00:13:26.769 ************************************ 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:26.769 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3115493 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3115493 ']' 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3115493 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3115493 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3115493' 00:13:27.027 killing process with pid 3115493 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3115493 00:13:27.027 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3115493 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:27.598 00:13:27.598 real 0m18.296s 00:13:27.598 user 1m10.031s 00:13:27.598 sys 0m2.478s 00:13:27.598 15:09:14 
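The teardown above mirrors the setup: disconnect the initiator, wait until the serial disappears from lsblk, delete the subsystem over RPC, then stop the target by PID after checking that the PID still belongs to the expected process. A sketch of that flow; the rpc.py path and the PID argument are assumptions.

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py                      # assumed
    pid=${1:?nvmf_tgt pid}                    # assumption for illustration

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # Wait until no block device carries the subsystem serial any more.
    while lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
        sleep 1
    done

    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Only kill the PID if it is still alive and still the process we expect.
    if kill -0 "$pid" 2>/dev/null && \
       [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        kill "$pid"
        # wait reaps the target when it was launched by this shell, as in the test.
        wait "$pid" 2>/dev/null || true
    fi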
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.598 ************************************ 00:13:27.598 END TEST nvmf_filesystem_in_capsule 00:13:27.598 ************************************ 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.598 rmmod nvme_tcp 00:13:27.598 rmmod nvme_fabrics 00:13:27.598 rmmod nvme_keyring 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:27.598 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.599 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.140 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.140 00:13:30.140 real 0m43.202s 00:13:30.140 user 2m24.255s 00:13:30.140 sys 0m7.530s 00:13:30.140 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:30.141 
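The cleanup trace above unloads the kernel initiator modules and then strips only the firewall rules the test added, by filtering on the SPDK_NVMF comment attached when the ACCEPT rule was inserted. A short sketch of that idea:

    # Unload the kernel NVMe/TCP initiator stack (best effort).
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true

    # Drop only the rules this test added: they were inserted with
    # '-m comment --comment SPDK_NVMF:...', so filter them out and reload.
    iptables-save | grep -v SPDK_NVMF | iptables-restore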
************************************ 00:13:30.141 END TEST nvmf_filesystem 00:13:30.141 ************************************ 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.141 ************************************ 00:13:30.141 START TEST nvmf_target_discovery 00:13:30.141 ************************************ 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:30.141 * Looking for test storage... 00:13:30.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:30.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.141 --rc genhtml_branch_coverage=1 00:13:30.141 --rc genhtml_function_coverage=1 00:13:30.141 --rc genhtml_legend=1 00:13:30.141 --rc geninfo_all_blocks=1 00:13:30.141 --rc geninfo_unexecuted_blocks=1 00:13:30.141 00:13:30.141 ' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:30.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.141 --rc genhtml_branch_coverage=1 00:13:30.141 --rc genhtml_function_coverage=1 00:13:30.141 --rc genhtml_legend=1 00:13:30.141 --rc geninfo_all_blocks=1 00:13:30.141 --rc geninfo_unexecuted_blocks=1 00:13:30.141 00:13:30.141 ' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:30.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.141 --rc genhtml_branch_coverage=1 00:13:30.141 --rc genhtml_function_coverage=1 00:13:30.141 --rc genhtml_legend=1 00:13:30.141 --rc geninfo_all_blocks=1 00:13:30.141 --rc geninfo_unexecuted_blocks=1 00:13:30.141 00:13:30.141 ' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:30.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.141 --rc genhtml_branch_coverage=1 00:13:30.141 --rc genhtml_function_coverage=1 00:13:30.141 --rc genhtml_legend=1 00:13:30.141 --rc geninfo_all_blocks=1 00:13:30.141 --rc geninfo_unexecuted_blocks=1 00:13:30.141 00:13:30.141 ' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.141 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.142 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.677 15:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:32.677 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:32.678 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:32.678 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:32.678 Found net devices under 0000:84:00.0: cvl_0_0 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
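The device scan above resolves each supported NIC's PCI address to its kernel network interface by listing /sys/bus/pci/devices/<bdf>/net/. A small sketch of the same lookup; the PCI addresses are simply the ones printed in this run, used as an example.

    # Map a PCI address to the net interfaces the kernel created for it.
    for pci in 0000:84:00.0 0000:84:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done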
00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:32.678 Found net devices under 0000:84:00.1: cvl_0_1 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.678 15:09:19 
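Because both E810 ports sit in the same box, the test isolates them: one port moves into a private network namespace for the target (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1), so traffic actually leaves the host. The next few entries bring the links up and ping in both directions; collected as one sketch, using the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port

    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Quick reachability check in both directions.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1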
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:32.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:13:32.678 00:13:32.678 --- 10.0.0.2 ping statistics --- 00:13:32.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.678 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:13:32.678 00:13:32.678 --- 10.0.0.1 ping statistics --- 00:13:32.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.678 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.678 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3120455 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3120455 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3120455 ']' 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.938 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.938 [2024-10-28 15:09:19.616831] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:13:32.938 [2024-10-28 15:09:19.616931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.938 [2024-10-28 15:09:19.762491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.195 [2024-10-28 15:09:19.884502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.195 [2024-10-28 15:09:19.884609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.195 [2024-10-28 15:09:19.884648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.195 [2024-10-28 15:09:19.884699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.195 [2024-10-28 15:09:19.884726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
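The trace above is a self-contained bring-up recipe: one port of the E810 pair (cvl_0_0) is moved into a private network namespace, both ends get 10.0.0.x/24 addresses, TCP/4420 is opened on the initiator-side interface, reachability is verified in both directions, and only then is nvmf_tgt launched inside the namespace. The lines below are a minimal stand-alone sketch of that sequence; they reuse the device names and addresses from this run and substitute a generic $SPDK_ROOT for the Jenkins workspace path, so treat them as an illustration of common.sh's nvmf_tcp_init rather than a copy of it.

# Sketch of the nvmf_tcp_init steps traced above (assumes cvl_0_0/cvl_0_1 exist
# and $SPDK_ROOT points at an SPDK build tree).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &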
00:13:33.195 [2024-10-28 15:09:19.888207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.195 [2024-10-28 15:09:19.888314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.195 [2024-10-28 15:09:19.888409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.195 [2024-10-28 15:09:19.888413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 [2024-10-28 15:09:20.115586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 Null1 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 15:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 [2024-10-28 15:09:20.159920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 Null2 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.452 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:33.453 Null3 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 Null4 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.453 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:13:33.711 00:13:33.711 Discovery Log Number of Records 6, Generation counter 6 00:13:33.711 =====Discovery Log Entry 0====== 00:13:33.711 trtype: tcp 00:13:33.711 adrfam: ipv4 00:13:33.711 subtype: current discovery subsystem 00:13:33.711 treq: not required 00:13:33.711 portid: 0 00:13:33.711 trsvcid: 4420 00:13:33.711 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:33.711 traddr: 10.0.0.2 00:13:33.711 eflags: explicit discovery connections, duplicate discovery information 00:13:33.711 sectype: none 00:13:33.711 =====Discovery Log Entry 1====== 00:13:33.711 trtype: tcp 00:13:33.711 adrfam: ipv4 00:13:33.711 subtype: nvme subsystem 00:13:33.711 treq: not required 00:13:33.711 portid: 0 00:13:33.711 trsvcid: 4420 00:13:33.711 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:33.711 traddr: 10.0.0.2 00:13:33.711 eflags: none 00:13:33.711 sectype: none 00:13:33.711 =====Discovery Log Entry 2====== 00:13:33.711 trtype: tcp 00:13:33.711 adrfam: ipv4 00:13:33.711 subtype: nvme subsystem 00:13:33.711 treq: not required 00:13:33.711 portid: 0 00:13:33.711 trsvcid: 4420 00:13:33.711 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:33.711 traddr: 10.0.0.2 00:13:33.711 eflags: none 00:13:33.711 sectype: none 00:13:33.711 =====Discovery Log Entry 3====== 00:13:33.711 trtype: tcp 00:13:33.711 adrfam: ipv4 00:13:33.711 subtype: nvme subsystem 00:13:33.711 treq: not required 00:13:33.711 portid: 0 00:13:33.711 trsvcid: 4420 00:13:33.711 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:33.711 traddr: 10.0.0.2 00:13:33.711 eflags: none 00:13:33.711 sectype: none 00:13:33.711 =====Discovery Log Entry 4====== 00:13:33.711 trtype: tcp 00:13:33.711 adrfam: ipv4 00:13:33.711 subtype: nvme subsystem 
00:13:33.711 treq: not required 00:13:33.711 portid: 0 00:13:33.711 trsvcid: 4420 00:13:33.711 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:33.711 traddr: 10.0.0.2 00:13:33.711 eflags: none 00:13:33.711 sectype: none 00:13:33.711 =====Discovery Log Entry 5====== 00:13:33.711 trtype: tcp 00:13:33.711 adrfam: ipv4 00:13:33.711 subtype: discovery subsystem referral 00:13:33.711 treq: not required 00:13:33.711 portid: 0 00:13:33.711 trsvcid: 4430 00:13:33.711 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:33.711 traddr: 10.0.0.2 00:13:33.711 eflags: none 00:13:33.711 sectype: none 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:33.711 Perform nvmf subsystem discovery via RPC 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.711 [ 00:13:33.711 { 00:13:33.711 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.711 "subtype": "Discovery", 00:13:33.711 "listen_addresses": [ 00:13:33.711 { 00:13:33.711 "trtype": "TCP", 00:13:33.711 "adrfam": "IPv4", 00:13:33.711 "traddr": "10.0.0.2", 00:13:33.711 "trsvcid": "4420" 00:13:33.711 } 00:13:33.711 ], 00:13:33.711 "allow_any_host": true, 00:13:33.711 "hosts": [] 00:13:33.711 }, 00:13:33.711 { 00:13:33.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.711 "subtype": "NVMe", 00:13:33.711 "listen_addresses": [ 00:13:33.711 { 00:13:33.711 "trtype": "TCP", 00:13:33.711 "adrfam": "IPv4", 00:13:33.711 "traddr": "10.0.0.2", 00:13:33.711 "trsvcid": "4420" 00:13:33.711 } 00:13:33.711 ], 00:13:33.711 "allow_any_host": true, 00:13:33.711 "hosts": [], 00:13:33.711 "serial_number": "SPDK00000000000001", 00:13:33.711 "model_number": "SPDK bdev Controller", 00:13:33.711 "max_namespaces": 32, 00:13:33.711 "min_cntlid": 1, 00:13:33.711 "max_cntlid": 65519, 00:13:33.711 "namespaces": [ 00:13:33.711 { 00:13:33.711 "nsid": 1, 00:13:33.711 "bdev_name": "Null1", 00:13:33.711 "name": "Null1", 00:13:33.711 "nguid": "404A4133AD26491CBBAE5C6A7C9CAEB9", 00:13:33.711 "uuid": "404a4133-ad26-491c-bbae-5c6a7c9caeb9" 00:13:33.711 } 00:13:33.711 ] 00:13:33.711 }, 00:13:33.711 { 00:13:33.711 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:33.711 "subtype": "NVMe", 00:13:33.711 "listen_addresses": [ 00:13:33.711 { 00:13:33.711 "trtype": "TCP", 00:13:33.711 "adrfam": "IPv4", 00:13:33.711 "traddr": "10.0.0.2", 00:13:33.711 "trsvcid": "4420" 00:13:33.711 } 00:13:33.711 ], 00:13:33.711 "allow_any_host": true, 00:13:33.711 "hosts": [], 00:13:33.711 "serial_number": "SPDK00000000000002", 00:13:33.711 "model_number": "SPDK bdev Controller", 00:13:33.711 "max_namespaces": 32, 00:13:33.711 "min_cntlid": 1, 00:13:33.711 "max_cntlid": 65519, 00:13:33.711 "namespaces": [ 00:13:33.711 { 00:13:33.711 "nsid": 1, 00:13:33.711 "bdev_name": "Null2", 00:13:33.711 "name": "Null2", 00:13:33.711 "nguid": "4F4028854A694988B02432961DDB3353", 00:13:33.711 "uuid": "4f402885-4a69-4988-b024-32961ddb3353" 00:13:33.711 } 00:13:33.711 ] 00:13:33.711 }, 00:13:33.711 { 00:13:33.711 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:33.711 "subtype": "NVMe", 00:13:33.711 "listen_addresses": [ 00:13:33.711 { 00:13:33.711 "trtype": "TCP", 00:13:33.711 "adrfam": "IPv4", 00:13:33.711 "traddr": "10.0.0.2", 
00:13:33.711 "trsvcid": "4420" 00:13:33.711 } 00:13:33.711 ], 00:13:33.711 "allow_any_host": true, 00:13:33.711 "hosts": [], 00:13:33.711 "serial_number": "SPDK00000000000003", 00:13:33.711 "model_number": "SPDK bdev Controller", 00:13:33.711 "max_namespaces": 32, 00:13:33.711 "min_cntlid": 1, 00:13:33.711 "max_cntlid": 65519, 00:13:33.711 "namespaces": [ 00:13:33.711 { 00:13:33.711 "nsid": 1, 00:13:33.711 "bdev_name": "Null3", 00:13:33.711 "name": "Null3", 00:13:33.711 "nguid": "5F0299B791F043BB83B154A4C8F04FE9", 00:13:33.711 "uuid": "5f0299b7-91f0-43bb-83b1-54a4c8f04fe9" 00:13:33.711 } 00:13:33.711 ] 00:13:33.711 }, 00:13:33.711 { 00:13:33.711 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:33.711 "subtype": "NVMe", 00:13:33.711 "listen_addresses": [ 00:13:33.711 { 00:13:33.711 "trtype": "TCP", 00:13:33.711 "adrfam": "IPv4", 00:13:33.711 "traddr": "10.0.0.2", 00:13:33.711 "trsvcid": "4420" 00:13:33.711 } 00:13:33.711 ], 00:13:33.711 "allow_any_host": true, 00:13:33.711 "hosts": [], 00:13:33.711 "serial_number": "SPDK00000000000004", 00:13:33.711 "model_number": "SPDK bdev Controller", 00:13:33.711 "max_namespaces": 32, 00:13:33.711 "min_cntlid": 1, 00:13:33.711 "max_cntlid": 65519, 00:13:33.711 "namespaces": [ 00:13:33.711 { 00:13:33.711 "nsid": 1, 00:13:33.711 "bdev_name": "Null4", 00:13:33.711 "name": "Null4", 00:13:33.711 "nguid": "50895BBDEEE94C3D955ACD4AF3F2EEE1", 00:13:33.711 "uuid": "50895bbd-eee9-4c3d-955a-cd4af3f2eee1" 00:13:33.711 } 00:13:33.711 ] 00:13:33.711 } 00:13:33.711 ] 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.711 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.712 15:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.712 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:33.971 15:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.971 rmmod nvme_tcp 00:13:33.971 rmmod nvme_fabrics 00:13:33.971 rmmod nvme_keyring 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3120455 ']' 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3120455 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3120455 ']' 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3120455 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3120455 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3120455' 00:13:33.971 killing process with pid 3120455 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3120455 00:13:33.971 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3120455 00:13:34.230 15:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.230 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.762 00:13:36.762 real 0m6.648s 00:13:36.762 user 0m5.719s 00:13:36.762 sys 0m2.621s 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:36.762 ************************************ 00:13:36.762 END TEST nvmf_target_discovery 00:13:36.762 ************************************ 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.762 ************************************ 00:13:36.762 START TEST nvmf_referrals 00:13:36.762 ************************************ 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:36.762 * Looking for test storage... 
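Before the referrals output continues, it is worth condensing what nvmf_target_discovery just exercised. Everything after the target launch is an RPC: one TCP transport, four null bdevs exported as four subsystems on 10.0.0.2:4420, a discovery listener, and a single referral to port 4430, which together yield the six discovery log records and the nvmf_get_subsystems JSON shown earlier. The sketch below replays those calls with scripts/rpc.py against the default /var/tmp/spdk.sock; it assumes the harness's rpc_cmd wrapper forwards the same arguments, and it reuses the $NVME_HOSTNQN/$NVME_HOSTID values defined in nvmf/common.sh.

# Condensed replay of the discovery.sh RPC flow traced above (sketch, not the script itself).
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512                                    # size/block size as in the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420          # discovery subsystem
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430                    # sixth log record

# Initiator view: 1 current discovery subsystem + 4 NVMe subsystems + 1 referral.
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems

# Teardown mirrors setup.
for i in 1 2 3 4; do
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    $rpc bdev_null_delete Null$i
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430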
00:13:36.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lcov --version 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:36.762 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:36.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.763 --rc genhtml_branch_coverage=1 00:13:36.763 --rc genhtml_function_coverage=1 00:13:36.763 --rc genhtml_legend=1 00:13:36.763 --rc geninfo_all_blocks=1 00:13:36.763 --rc geninfo_unexecuted_blocks=1 00:13:36.763 00:13:36.763 ' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:36.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.763 --rc genhtml_branch_coverage=1 00:13:36.763 --rc genhtml_function_coverage=1 00:13:36.763 --rc genhtml_legend=1 00:13:36.763 --rc geninfo_all_blocks=1 00:13:36.763 --rc geninfo_unexecuted_blocks=1 00:13:36.763 00:13:36.763 ' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:36.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.763 --rc genhtml_branch_coverage=1 00:13:36.763 --rc genhtml_function_coverage=1 00:13:36.763 --rc genhtml_legend=1 00:13:36.763 --rc geninfo_all_blocks=1 00:13:36.763 --rc geninfo_unexecuted_blocks=1 00:13:36.763 00:13:36.763 ' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:36.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.763 --rc genhtml_branch_coverage=1 00:13:36.763 --rc genhtml_function_coverage=1 00:13:36.763 --rc genhtml_legend=1 00:13:36.763 --rc geninfo_all_blocks=1 00:13:36.763 --rc geninfo_unexecuted_blocks=1 00:13:36.763 00:13:36.763 ' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:36.763 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.764 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.044 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.044 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:40.044 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:40.044 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:40.045 15:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:40.045 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:40.045 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:40.045 
15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:40.045 Found net devices under 0000:84:00.0: cvl_0_0 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:40.045 Found net devices under 0000:84:00.1: cvl_0_1 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:40.045 15:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:40.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:40.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:40.045 00:13:40.045 --- 10.0.0.2 ping statistics --- 00:13:40.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.045 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:40.045 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:13:40.046 00:13:40.046 --- 10.0.0.1 ping statistics --- 00:13:40.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.046 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3122660 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3122660 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3122660 ']' 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
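At this point nvmfappstart has brought both interfaces up, verified connectivity in each direction with ping, and launched the target inside the cvl_0_0_ns_spdk namespace; the "Waiting for process..." line is waitforlisten polling for the RPC socket. A minimal stand-alone approximation of that step, assuming an SPDK checkout in the current directory and the namespace name from this run (the in-tree waitforlisten also checks the pid, this sketch only polls the socket):

    # start the NVMe-oF target on cores 0-3 (-m 0xF) inside the target namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready to accept RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done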
00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.046 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.046 [2024-10-28 15:09:26.517619] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:13:40.046 [2024-10-28 15:09:26.517831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.046 [2024-10-28 15:09:26.730875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.046 [2024-10-28 15:09:26.890932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.046 [2024-10-28 15:09:26.891050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.046 [2024-10-28 15:09:26.891116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.046 [2024-10-28 15:09:26.891163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.046 [2024-10-28 15:09:26.891202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.046 [2024-10-28 15:09:26.895792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.046 [2024-10-28 15:09:26.895906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.046 [2024-10-28 15:09:26.896011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.046 [2024-10-28 15:09:26.896020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.304 [2024-10-28 15:09:27.074962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
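With the reactors running, referrals.sh@40-41 issue the two RPCs traced next: create the TCP transport and attach a discovery listener on 10.0.0.2:8009. rpc_cmd is the harness wrapper around scripts/rpc.py; issued by hand, the equivalent calls would look roughly like this (addresses and flags copied from the trace; "discovery" expands to the well-known discovery NQN in current rpc.py):

    # transport options exactly as the harness passes them (-t tcp -o from NVMF_TRANSPORT_OPTS, plus -u 8192)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # listen for discovery connections on the target-namespace address, well-known port 8009
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery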
00:13:40.304 [2024-10-28 15:09:27.087313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:40.304 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.562 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:40.819 15:09:27 
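The checks at referrals.sh@49-50 above compare the same referral list from both sides: the target's RPC view and what an initiator actually sees in the discovery log page. Stripped of the get_referral_ips plumbing, the comparison reduces to something like the following (jq filters copied from the trace; the --hostnqn/--hostid arguments are omitted here for brevity):

    rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    # the test expects 127.0.0.2 127.0.0.3 127.0.0.4 from both sources at this point
    [[ "$rpc_ips" == "$nvme_ips" ]] && echo "referrals agree: $rpc_ips"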
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:40.819 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:40.820 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:40.820 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:40.820 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.077 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:41.078 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:41.078 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:41.078 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:41.078 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:41.078 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:41.078 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:41.078 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:41.335 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:41.336 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:41.336 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:41.336 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:41.336 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:41.336 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:41.336 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:41.336 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:41.336 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:41.336 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:41.336 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:41.336 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:41.336 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:41.593 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:41.593 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:41.593 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.593 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.593 15:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:41.593 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:41.594 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:41.852 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:42.110 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:42.368 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:42.368 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
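nvmftestfini (entered at referrals.sh@86) now unwinds what nvmftestinit set up; the rmmod/kill/iptables lines that follow are that teardown. Condensed, and assuming the pid variable and interface/namespace names from this run (the in-tree _remove_spdk_ns hides the namespace deletion):

    modprobe -r nvme-tcp nvme-fabrics                       # unload the initiator-side kernel modules
    kill -9 "$nvmfpid"; wait "$nvmfpid" 2>/dev/null         # stop the nvmf_tgt started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged accept rule
    ip netns delete cvl_0_0_ns_spdk                         # releasing the namespace returns cvl_0_0 to the host
    ip -4 addr flush cvl_0_1                                # clear the initiator-side address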
00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.369 rmmod nvme_tcp 00:13:42.369 rmmod nvme_fabrics 00:13:42.369 rmmod nvme_keyring 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3122660 ']' 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3122660 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3122660 ']' 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3122660 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.369 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3122660 00:13:42.627 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.627 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.627 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3122660' 00:13:42.627 killing process with pid 3122660 00:13:42.627 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3122660 00:13:42.627 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3122660 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.937 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.937 15:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.876 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.877 00:13:44.877 real 0m8.434s 00:13:44.877 user 0m12.687s 00:13:44.877 sys 0m3.072s 00:13:44.877 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.877 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:44.877 ************************************ 00:13:44.877 END TEST nvmf_referrals 00:13:44.877 ************************************ 00:13:44.877 15:09:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:44.877 15:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:44.877 15:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.877 15:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.877 ************************************ 00:13:44.877 START TEST nvmf_connect_disconnect 00:13:44.877 ************************************ 00:13:44.877 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:44.877 * Looking for test storage... 00:13:45.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.137 15:09:31 
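The connect_disconnect test begins by gating its lcov options on a dotted-version comparison ("lt 1.15 2" via cmp_versions in scripts/common.sh, traced above and below). The heart of that helper is a componentwise numeric compare after splitting on '.' and '-'; a compact sketch of the same idea (simplified: the in-tree version also handles the '>', '>=' and '==' operators):

    ver_lt() {                                   # succeeds if $1 sorts before $2, e.g. 1.15 < 2
        local -a a b; local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                                 # versions are equal
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"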
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.137 --rc genhtml_branch_coverage=1 00:13:45.137 --rc genhtml_function_coverage=1 00:13:45.137 --rc genhtml_legend=1 00:13:45.137 --rc geninfo_all_blocks=1 00:13:45.137 --rc geninfo_unexecuted_blocks=1 00:13:45.137 00:13:45.137 ' 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.137 --rc genhtml_branch_coverage=1 00:13:45.137 --rc genhtml_function_coverage=1 00:13:45.137 --rc genhtml_legend=1 00:13:45.137 --rc geninfo_all_blocks=1 00:13:45.137 --rc geninfo_unexecuted_blocks=1 00:13:45.137 00:13:45.137 ' 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.137 --rc genhtml_branch_coverage=1 00:13:45.137 --rc genhtml_function_coverage=1 00:13:45.137 --rc genhtml_legend=1 00:13:45.137 --rc geninfo_all_blocks=1 00:13:45.137 --rc geninfo_unexecuted_blocks=1 00:13:45.137 00:13:45.137 ' 00:13:45.137 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:45.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.138 --rc genhtml_branch_coverage=1 00:13:45.138 --rc genhtml_function_coverage=1 00:13:45.138 --rc genhtml_legend=1 00:13:45.138 --rc geninfo_all_blocks=1 00:13:45.138 --rc geninfo_unexecuted_blocks=1 00:13:45.138 00:13:45.138 ' 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.138 15:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.138 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.430 
15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:48.430 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.430 
15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:48.430 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:48.430 Found net devices under 0000:84:00.0: cvl_0_0 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
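The "Found net devices under 0000:84:00.x" lines come from gather_supported_nvmf_pci_devs resolving each matched PCI function to its kernel interface through sysfs. The lookup amounts to roughly this (operstate check simplified; the real helper also copes with unbound devices and the RDMA code paths):

    pci=0000:84:00.0                             # first matched E810 function from the scan above
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ $(cat "$dev/operstate") == up ]] || continue
        echo "Found net devices under $pci: ${dev##*/}"   # e.g. cvl_0_0
    done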
00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:48.430 Found net devices under 0000:84:00.1: cvl_0_1 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.430 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:13:48.431 00:13:48.431 --- 10.0.0.2 ping statistics --- 00:13:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.431 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:13:48.431 00:13:48.431 --- 10.0.0.1 ping statistics --- 00:13:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.431 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.431 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3125229 00:13:48.431 15:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3125229 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3125229 ']' 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.431 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.431 [2024-10-28 15:09:35.089891] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:13:48.431 [2024-10-28 15:09:35.089993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.431 [2024-10-28 15:09:35.229249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.690 [2024-10-28 15:09:35.351450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.690 [2024-10-28 15:09:35.351582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.690 [2024-10-28 15:09:35.351619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.690 [2024-10-28 15:09:35.351661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.690 [2024-10-28 15:09:35.351692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
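Condensed, the nvmf_tcp_init sequence traced above builds a two-port loopback topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2/24, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420, both directions are ping-checked, and nvmf_tgt is launched inside the namespace. A sketch of the same steps, with interface names, addresses, and paths taken from this run:

  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1           # start from a clean slate
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                         # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                  # target -> initiator
  modprobe nvme-tcp
  ip netns exec "$TARGET_NS" \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # the harness then waits for the RPC socket (/var/tmp/spdk.sock) before issuing commands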
00:13:48.690 [2024-10-28 15:09:35.355165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.690 [2024-10-28 15:09:35.355266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.690 [2024-10-28 15:09:35.355356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.690 [2024-10-28 15:09:35.355360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.624 [2024-10-28 15:09:36.477467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.624 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 15:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 [2024-10-28 15:09:36.545156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:49.881 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:52.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:04.035 rmmod nvme_tcp 00:14:04.035 rmmod nvme_fabrics 00:14:04.035 rmmod nvme_keyring 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3125229 ']' 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3125229 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3125229 ']' 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3125229 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
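The connect_disconnect test body traced above provisions a single subsystem over the target's RPC socket and then connects and disconnects a host against it num_iterations=5 times; each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line is one iteration's disconnect output. A sketch of the equivalent steps, assuming scripts/rpc.py stands in for the harness's rpc_cmd wrapper and that nvme-cli drives the host side, as the output suggests:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target provisioning (flags copied from the trace above).
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512                  # 64 MB RAM bdev, 512 B blocks -> Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Host side: repeat connect/disconnect (5 iterations in this run).
  for i in $(seq 1 5); do
      nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
      nvme disconnect -n "$NQN"        # prints "... disconnected 1 controller(s)"
  done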
00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3125229 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3125229' 00:14:04.035 killing process with pid 3125229 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3125229 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3125229 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:04.035 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:04.296 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.296 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:04.296 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.296 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.296 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.204 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:06.204 00:14:06.204 real 0m21.281s 00:14:06.204 user 1m2.259s 00:14:06.204 sys 0m4.464s 00:14:06.204 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.204 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:06.204 ************************************ 00:14:06.204 END TEST nvmf_connect_disconnect 00:14:06.204 ************************************ 00:14:06.204 15:09:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:06.204 15:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:06.204 15:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.204 15:09:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.204 ************************************ 00:14:06.205 START TEST nvmf_multitarget 00:14:06.205 ************************************ 00:14:06.205 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:06.465 * Looking for test storage... 00:14:06.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lcov --version 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:06.465 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:06.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.726 --rc genhtml_branch_coverage=1 00:14:06.726 --rc genhtml_function_coverage=1 00:14:06.726 --rc genhtml_legend=1 00:14:06.726 --rc geninfo_all_blocks=1 00:14:06.726 --rc geninfo_unexecuted_blocks=1 00:14:06.726 00:14:06.726 ' 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:06.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.726 --rc genhtml_branch_coverage=1 00:14:06.726 --rc genhtml_function_coverage=1 00:14:06.726 --rc genhtml_legend=1 00:14:06.726 --rc geninfo_all_blocks=1 00:14:06.726 --rc geninfo_unexecuted_blocks=1 00:14:06.726 00:14:06.726 ' 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:06.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.726 --rc genhtml_branch_coverage=1 00:14:06.726 --rc genhtml_function_coverage=1 00:14:06.726 --rc genhtml_legend=1 00:14:06.726 --rc geninfo_all_blocks=1 00:14:06.726 --rc geninfo_unexecuted_blocks=1 00:14:06.726 00:14:06.726 ' 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:06.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.726 --rc genhtml_branch_coverage=1 00:14:06.726 --rc genhtml_function_coverage=1 00:14:06.726 --rc genhtml_legend=1 00:14:06.726 --rc geninfo_all_blocks=1 00:14:06.726 --rc geninfo_unexecuted_blocks=1 00:14:06.726 00:14:06.726 ' 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.726 15:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.726 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:06.727 15:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.727 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:10.023 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:10.023 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:10.023 Found net devices under 0000:84:00.0: cvl_0_0 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:10.023 Found net devices under 0000:84:00.1: cvl_0_1 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.023 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:14:10.024 00:14:10.024 --- 10.0.0.2 ping statistics --- 00:14:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.024 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:14:10.024 00:14:10.024 --- 10.0.0.1 ping statistics --- 00:14:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.024 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3129142 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3129142 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3129142 ']' 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.024 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.024 [2024-10-28 15:09:56.586088] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:14:10.024 [2024-10-28 15:09:56.586257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.024 [2024-10-28 15:09:56.771604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.283 [2024-10-28 15:09:56.894488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.283 [2024-10-28 15:09:56.894587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.283 [2024-10-28 15:09:56.894624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.283 [2024-10-28 15:09:56.894670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.283 [2024-10-28 15:09:56.894700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.283 [2024-10-28 15:09:56.898169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.283 [2024-10-28 15:09:56.898271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.283 [2024-10-28 15:09:56.898364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.283 [2024-10-28 15:09:56.898368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:10.283 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:10.540 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:10.540 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:10.798 "nvmf_tgt_1" 00:14:10.798 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:11.055 "nvmf_tgt_2" 00:14:11.055 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
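The multitarget checks traced here and in the lines that follow drive test/nvmf/target/multitarget_rpc.py against the running nvmf_tgt: the default target count is confirmed as 1, two extra targets (nvmf_tgt_1, nvmf_tgt_2) are created with the -s 32 flag used in this run, the count is confirmed as 3, and after deleting both it drops back to 1. A sketch of that verification, reusing the helper path and flags from the trace; the jq length comparisons mirror the script's own '[' N '!=' N ']' checks:

  MT_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($MT_RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists

  $MT_RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
  $MT_RPC nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
  [ "$($MT_RPC nvmf_get_targets | jq length)" -eq 3 ]

  $MT_RPC nvmf_delete_target -n nvmf_tgt_1              # each delete prints "true"
  $MT_RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($MT_RPC nvmf_get_targets | jq length)" -eq 1 ]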
00:14:11.055 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:11.313 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:11.313 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:11.313 true 00:14:11.313 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:11.570 true 00:14:11.570 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:11.570 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.827 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.827 rmmod nvme_tcp 00:14:11.827 rmmod nvme_fabrics 00:14:11.827 rmmod nvme_keyring 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3129142 ']' 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3129142 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3129142 ']' 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3129142 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3129142 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.086 15:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3129142' 00:14:12.086 killing process with pid 3129142 00:14:12.086 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3129142 00:14:12.087 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3129142 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.346 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:14.883 00:14:14.883 real 0m8.110s 00:14:14.883 user 0m11.461s 00:14:14.883 sys 0m3.087s 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:14.883 ************************************ 00:14:14.883 END TEST nvmf_multitarget 00:14:14.883 ************************************ 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.883 ************************************ 00:14:14.883 START TEST nvmf_rpc 00:14:14.883 ************************************ 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:14.883 * Looking for test storage... 
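The teardown that closed the multitarget run above mirrors the earlier setup: unload the NVMe/TCP host modules, kill the target process, strip only the SPDK-tagged iptables rule, and flush the test addresses. A rough condensation follows; the netns deletion is an assumption about what _remove_spdk_ns does, everything else is taken from the trace:

    modprobe -r nvme-tcp             # the rmmod output above shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
    modprobe -r nvme-fabrics
    kill "$nvmfpid"                  # 3129142 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-added rule
    ip netns delete cvl_0_0_ns_spdk  # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
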
00:14:14.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.883 --rc genhtml_branch_coverage=1 00:14:14.883 --rc genhtml_function_coverage=1 00:14:14.883 --rc genhtml_legend=1 00:14:14.883 --rc geninfo_all_blocks=1 00:14:14.883 --rc geninfo_unexecuted_blocks=1 00:14:14.883 00:14:14.883 ' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.883 --rc genhtml_branch_coverage=1 00:14:14.883 --rc genhtml_function_coverage=1 00:14:14.883 --rc genhtml_legend=1 00:14:14.883 --rc geninfo_all_blocks=1 00:14:14.883 --rc geninfo_unexecuted_blocks=1 00:14:14.883 00:14:14.883 ' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.883 --rc genhtml_branch_coverage=1 00:14:14.883 --rc genhtml_function_coverage=1 00:14:14.883 --rc genhtml_legend=1 00:14:14.883 --rc geninfo_all_blocks=1 00:14:14.883 --rc geninfo_unexecuted_blocks=1 00:14:14.883 00:14:14.883 ' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.883 --rc genhtml_branch_coverage=1 00:14:14.883 --rc genhtml_function_coverage=1 00:14:14.883 --rc genhtml_legend=1 00:14:14.883 --rc geninfo_all_blocks=1 00:14:14.883 --rc geninfo_unexecuted_blocks=1 00:14:14.883 00:14:14.883 ' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
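The scripts/common.sh trace above is just a field-wise version comparison: lcov --version is reduced to its last field, split on '.' and '-', and compared number by number against 2, and because 1.15 < 2 the legacy --rc lcov_branch_coverage/lcov_function_coverage spelling is selected. An equivalent standalone comparison, written here as an illustration rather than the SPDK helper itself, looks like this:

    version_lt() {                                   # e.g. version_lt 1.15 2 -> true
        local -a v1 v2
        local i
        IFS=.- read -ra v1 <<< "$1"                  # split each version on '.' and '-'
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                     # versions equal: not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.0"
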
00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.883 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:14.884 15:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:14.884 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:18.174 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:18.174 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:18.174 Found net devices under 0000:84:00.0: cvl_0_0 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:18.174 Found net devices under 0000:84:00.1: cvl_0_1 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.174 15:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:14:18.174 00:14:18.174 --- 10.0.0.2 ping statistics --- 00:14:18.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.174 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:14:18.174 00:14:18.174 --- 10.0.0.1 ping statistics --- 00:14:18.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.174 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.174 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3131520 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3131520 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3131520 ']' 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.175 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.175 [2024-10-28 15:10:04.597979] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
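The nvmf_tgt instance starting here runs inside the cvl_0_0_ns_spdk namespace prepared just above, listening on 10.0.0.2, while the host-side nvme commands stay in the root namespace on 10.0.0.1. Pulled out of the trace, the two-port loopback topology (the E810 ports cvl_0_0/cvl_0_1 on this rig) is built roughly as follows; the iptables comment string is shortened here for clarity:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                     # root namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root namespace
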
00:14:18.175 [2024-10-28 15:10:04.598148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.175 [2024-10-28 15:10:04.783843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.175 [2024-10-28 15:10:04.903140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.175 [2024-10-28 15:10:04.903260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.175 [2024-10-28 15:10:04.903297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.175 [2024-10-28 15:10:04.903328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.175 [2024-10-28 15:10:04.903355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.175 [2024-10-28 15:10:04.906944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.175 [2024-10-28 15:10:04.907043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.175 [2024-10-28 15:10:04.907134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.175 [2024-10-28 15:10:04.907138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:18.433 "tick_rate": 2700000000, 00:14:18.433 "poll_groups": [ 00:14:18.433 { 00:14:18.433 "name": "nvmf_tgt_poll_group_000", 00:14:18.433 "admin_qpairs": 0, 00:14:18.433 "io_qpairs": 0, 00:14:18.433 "current_admin_qpairs": 0, 00:14:18.433 "current_io_qpairs": 0, 00:14:18.433 "pending_bdev_io": 0, 00:14:18.433 "completed_nvme_io": 0, 00:14:18.433 "transports": [] 00:14:18.433 }, 00:14:18.433 { 00:14:18.433 "name": "nvmf_tgt_poll_group_001", 00:14:18.433 "admin_qpairs": 0, 00:14:18.433 "io_qpairs": 0, 00:14:18.433 "current_admin_qpairs": 0, 00:14:18.433 "current_io_qpairs": 0, 00:14:18.433 "pending_bdev_io": 0, 00:14:18.433 "completed_nvme_io": 0, 00:14:18.433 "transports": [] 00:14:18.433 }, 00:14:18.433 { 00:14:18.433 "name": "nvmf_tgt_poll_group_002", 00:14:18.433 "admin_qpairs": 0, 00:14:18.433 "io_qpairs": 0, 00:14:18.433 
"current_admin_qpairs": 0, 00:14:18.433 "current_io_qpairs": 0, 00:14:18.433 "pending_bdev_io": 0, 00:14:18.433 "completed_nvme_io": 0, 00:14:18.433 "transports": [] 00:14:18.433 }, 00:14:18.433 { 00:14:18.433 "name": "nvmf_tgt_poll_group_003", 00:14:18.433 "admin_qpairs": 0, 00:14:18.433 "io_qpairs": 0, 00:14:18.433 "current_admin_qpairs": 0, 00:14:18.433 "current_io_qpairs": 0, 00:14:18.433 "pending_bdev_io": 0, 00:14:18.433 "completed_nvme_io": 0, 00:14:18.433 "transports": [] 00:14:18.433 } 00:14:18.433 ] 00:14:18.433 }' 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.433 [2024-10-28 15:10:05.285083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.433 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:18.692 "tick_rate": 2700000000, 00:14:18.692 "poll_groups": [ 00:14:18.692 { 00:14:18.692 "name": "nvmf_tgt_poll_group_000", 00:14:18.692 "admin_qpairs": 0, 00:14:18.692 "io_qpairs": 0, 00:14:18.692 "current_admin_qpairs": 0, 00:14:18.692 "current_io_qpairs": 0, 00:14:18.692 "pending_bdev_io": 0, 00:14:18.692 "completed_nvme_io": 0, 00:14:18.692 "transports": [ 00:14:18.692 { 00:14:18.692 "trtype": "TCP" 00:14:18.692 } 00:14:18.692 ] 00:14:18.692 }, 00:14:18.692 { 00:14:18.692 "name": "nvmf_tgt_poll_group_001", 00:14:18.692 "admin_qpairs": 0, 00:14:18.692 "io_qpairs": 0, 00:14:18.692 "current_admin_qpairs": 0, 00:14:18.692 "current_io_qpairs": 0, 00:14:18.692 "pending_bdev_io": 0, 00:14:18.692 "completed_nvme_io": 0, 00:14:18.692 "transports": [ 00:14:18.692 { 00:14:18.692 "trtype": "TCP" 00:14:18.692 } 00:14:18.692 ] 00:14:18.692 }, 00:14:18.692 { 00:14:18.692 "name": "nvmf_tgt_poll_group_002", 00:14:18.692 "admin_qpairs": 0, 00:14:18.692 "io_qpairs": 0, 00:14:18.692 "current_admin_qpairs": 0, 00:14:18.692 "current_io_qpairs": 0, 00:14:18.692 "pending_bdev_io": 0, 00:14:18.692 "completed_nvme_io": 0, 00:14:18.692 "transports": [ 00:14:18.692 { 00:14:18.692 "trtype": "TCP" 
00:14:18.692 } 00:14:18.692 ] 00:14:18.692 }, 00:14:18.692 { 00:14:18.692 "name": "nvmf_tgt_poll_group_003", 00:14:18.692 "admin_qpairs": 0, 00:14:18.692 "io_qpairs": 0, 00:14:18.692 "current_admin_qpairs": 0, 00:14:18.692 "current_io_qpairs": 0, 00:14:18.692 "pending_bdev_io": 0, 00:14:18.692 "completed_nvme_io": 0, 00:14:18.692 "transports": [ 00:14:18.692 { 00:14:18.692 "trtype": "TCP" 00:14:18.692 } 00:14:18.692 ] 00:14:18.692 } 00:14:18.692 ] 00:14:18.692 }' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 Malloc1 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 [2024-10-28 15:10:05.467685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.692 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:18.693 [2024-10-28 15:10:05.500577] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:14:18.693 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:18.693 could not add new controller: failed to write to nvme-fabrics device 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:18.693 15:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.693 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.625 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.625 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:19.625 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.625 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:19.625 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:21.522 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.522 [2024-10-28 15:10:08.380390] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:14:21.780 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:21.780 could not add new controller: failed to write to nvme-fabrics device 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 
15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.780 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.345 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.345 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:22.345 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.345 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:22.345 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:24.241 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:24.241 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:24.241 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.241 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:24.241 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.241 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:24.241 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.499 
15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.499 [2024-10-28 15:10:11.155755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.499 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.144 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:25.144 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:25.144 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.144 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:25.144 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.084 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.342 [2024-10-28 15:10:13.960422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.342 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.907 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.907 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.907 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.907 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:27.907 15:10:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 [2024-10-28 15:10:16.800437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.431 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.688 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:30.688 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:30.688 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.688 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:30.688 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:32.582 
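The waitforserial calls in the trace poll lsblk until a namespace with the expected serial appears. A rough reconstruction from the xtrace above (autotest_common.sh lines 1198-1208 in this build; the in-tree helper may differ in detail):

  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=1 nvme_devices=0
      [[ -n ${2:-} ]] && nvme_device_counter=$2            # optional expected device count
      sleep 2                                              # give the kernel time to enumerate the namespace
      while (( i++ <= 15 )); do
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
          sleep 2
      done
      return 1
  }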
15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:32.582 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:32.582 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.582 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:32.582 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.582 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:32.582 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 [2024-10-28 15:10:19.591748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.841 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:33.406 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:33.406 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:33.406 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:33.406 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:33.406 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
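The matching teardown, again condensed from the trace (the serial-gone wait is simplified here; the harness's waitforserial_disconnect has its own retry bounds):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done   # wait until the namespace is gone
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5   # nsid 5 added during setup
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1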
00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.934 [2024-10-28 15:10:22.434206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.934 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:36.499 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.499 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:36.499 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.499 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:36.499 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:38.399 
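The trace that follows switches to a pure RPC lifecycle check: five rounds of creating a subsystem, attaching and detaching a namespace, and deleting the subsystem without ever connecting a host. Condensed into a sketch (command names and loop count taken from the seq 1 5 and rpc.sh lines below):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # nsid auto-assigned, removed as 1 below
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done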
15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.399 [2024-10-28 15:10:25.250533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.399 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 [2024-10-28 15:10:25.298578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 
15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 [2024-10-28 15:10:25.346780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 [2024-10-28 15:10:25.394929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.657 [2024-10-28 15:10:25.443110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.657 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:38.658 "tick_rate": 2700000000, 00:14:38.658 "poll_groups": [ 00:14:38.658 { 00:14:38.658 "name": "nvmf_tgt_poll_group_000", 00:14:38.658 "admin_qpairs": 2, 00:14:38.658 "io_qpairs": 84, 00:14:38.658 "current_admin_qpairs": 0, 00:14:38.658 "current_io_qpairs": 0, 00:14:38.658 "pending_bdev_io": 0, 00:14:38.658 "completed_nvme_io": 184, 00:14:38.658 "transports": [ 00:14:38.658 { 00:14:38.658 "trtype": "TCP" 00:14:38.658 } 00:14:38.658 ] 00:14:38.658 }, 00:14:38.658 { 00:14:38.658 "name": "nvmf_tgt_poll_group_001", 00:14:38.658 "admin_qpairs": 2, 00:14:38.658 "io_qpairs": 84, 00:14:38.658 "current_admin_qpairs": 0, 00:14:38.658 "current_io_qpairs": 0, 00:14:38.658 "pending_bdev_io": 0, 00:14:38.658 "completed_nvme_io": 184, 00:14:38.658 "transports": [ 00:14:38.658 { 00:14:38.658 "trtype": "TCP" 00:14:38.658 } 00:14:38.658 ] 00:14:38.658 }, 00:14:38.658 { 00:14:38.658 "name": "nvmf_tgt_poll_group_002", 00:14:38.658 "admin_qpairs": 1, 00:14:38.658 "io_qpairs": 84, 00:14:38.658 "current_admin_qpairs": 0, 00:14:38.658 "current_io_qpairs": 0, 00:14:38.658 "pending_bdev_io": 0, 00:14:38.658 "completed_nvme_io": 135, 00:14:38.658 "transports": [ 00:14:38.658 { 00:14:38.658 "trtype": "TCP" 00:14:38.658 } 00:14:38.658 ] 00:14:38.658 }, 00:14:38.658 { 00:14:38.658 "name": "nvmf_tgt_poll_group_003", 00:14:38.658 "admin_qpairs": 2, 00:14:38.658 "io_qpairs": 84, 00:14:38.658 "current_admin_qpairs": 0, 00:14:38.658 "current_io_qpairs": 0, 00:14:38.658 "pending_bdev_io": 0, 00:14:38.658 "completed_nvme_io": 183, 00:14:38.658 "transports": [ 00:14:38.658 { 00:14:38.658 "trtype": "TCP" 00:14:38.658 } 00:14:38.658 ] 00:14:38.658 } 00:14:38.658 ] 00:14:38.658 }' 00:14:38.658 15:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:38.658 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.916 rmmod nvme_tcp 00:14:38.916 rmmod nvme_fabrics 00:14:38.916 rmmod nvme_keyring 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3131520 ']' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3131520 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3131520 ']' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3131520 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3131520 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3131520' 00:14:38.916 killing process with pid 3131520 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3131520 00:14:38.916 15:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3131520 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.486 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.396 00:14:41.396 real 0m26.889s 00:14:41.396 user 1m24.347s 00:14:41.396 sys 0m5.232s 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.396 ************************************ 00:14:41.396 END TEST nvmf_rpc 00:14:41.396 ************************************ 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.396 ************************************ 00:14:41.396 START TEST nvmf_invalid 00:14:41.396 ************************************ 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:41.396 * Looking for test storage... 
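For reference, the jsum checks near the end of nvmf_rpc above sum per-poll-group counters out of nvmf_get_stats with jq and awk; a sketch of that aggregation (filter strings copied from the trace, the greater-than-zero assertions mirror rpc.sh@112 and @113):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  stats=$($rpc nvmf_get_stats)
  admin_qpairs=$(jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1}END{print s}')   # 7 in this run
  io_qpairs=$(jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1}END{print s}')         # 336 in this run
  (( admin_qpairs > 0 )) && (( io_qpairs > 0 ))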
00:14:41.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lcov --version 00:14:41.396 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.656 --rc genhtml_branch_coverage=1 00:14:41.656 --rc genhtml_function_coverage=1 00:14:41.656 --rc genhtml_legend=1 00:14:41.656 --rc geninfo_all_blocks=1 00:14:41.656 --rc geninfo_unexecuted_blocks=1 00:14:41.656 00:14:41.656 ' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.656 --rc genhtml_branch_coverage=1 00:14:41.656 --rc genhtml_function_coverage=1 00:14:41.656 --rc genhtml_legend=1 00:14:41.656 --rc geninfo_all_blocks=1 00:14:41.656 --rc geninfo_unexecuted_blocks=1 00:14:41.656 00:14:41.656 ' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.656 --rc genhtml_branch_coverage=1 00:14:41.656 --rc genhtml_function_coverage=1 00:14:41.656 --rc genhtml_legend=1 00:14:41.656 --rc geninfo_all_blocks=1 00:14:41.656 --rc geninfo_unexecuted_blocks=1 00:14:41.656 00:14:41.656 ' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.656 --rc genhtml_branch_coverage=1 00:14:41.656 --rc genhtml_function_coverage=1 00:14:41.656 --rc genhtml_legend=1 00:14:41.656 --rc geninfo_all_blocks=1 00:14:41.656 --rc geninfo_unexecuted_blocks=1 00:14:41.656 00:14:41.656 ' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:41.656 15:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.656 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.657 15:10:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.194 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.194 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:44.194 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:44.194 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:44.194 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:44.194 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:44.194 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:44.195 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:44.195 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.195 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.452 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:44.453 Found net devices under 0000:84:00.0: cvl_0_0 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:44.453 Found net devices under 0000:84:00.1: cvl_0_1 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:44.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:14:44.453 00:14:44.453 --- 10.0.0.2 ping statistics --- 00:14:44.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.453 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:14:44.453 00:14:44.453 --- 10.0.0.1 ping statistics --- 00:14:44.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.453 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3136165 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3136165 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3136165 ']' 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.453 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.453 [2024-10-28 15:10:31.300327] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
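The records above show the TCP test bed being wired up before the target starts: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the peer port (cvl_0_1) keeps 10.0.0.1/24 in the default namespace, TCP port 4420 is opened in iptables, and both directions are ping-checked. A minimal stand-alone sketch of those same steps, reusing the interface names, namespace and addresses seen in this run (they would need adjusting on other hosts), could look like:

  # namespace wiring equivalent to the nvmf/common.sh steps logged above
  NS=cvl_0_0_ns_spdk
  sudo ip netns add "$NS"
  sudo ip link set cvl_0_0 netns "$NS"            # target-side port lives in the namespace
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the default namespace
  sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec "$NS" ip link set cvl_0_0 up
  sudo ip netns exec "$NS" ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on 4420
  ping -c 1 10.0.0.2                              # initiator -> target
  sudo ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

As the following records show, nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...), so it listens on 10.0.0.2 while the initiator-side tools reach it from the default namespace.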
00:14:44.453 [2024-10-28 15:10:31.300422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.710 [2024-10-28 15:10:31.436009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.710 [2024-10-28 15:10:31.556530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.710 [2024-10-28 15:10:31.556641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.710 [2024-10-28 15:10:31.556701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.710 [2024-10-28 15:10:31.556732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.710 [2024-10-28 15:10:31.556759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.710 [2024-10-28 15:10:31.560374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.710 [2024-10-28 15:10:31.560479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.710 [2024-10-28 15:10:31.560576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.710 [2024-10-28 15:10:31.560579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:44.967 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2681 00:14:45.223 [2024-10-28 15:10:32.024573] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:45.223 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:45.223 { 00:14:45.223 "nqn": "nqn.2016-06.io.spdk:cnode2681", 00:14:45.223 "tgt_name": "foobar", 00:14:45.223 "method": "nvmf_create_subsystem", 00:14:45.223 "req_id": 1 00:14:45.223 } 00:14:45.223 Got JSON-RPC error response 00:14:45.223 response: 00:14:45.223 { 00:14:45.223 "code": -32603, 00:14:45.223 "message": "Unable to find target foobar" 00:14:45.223 }' 00:14:45.223 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:45.223 { 00:14:45.223 "nqn": "nqn.2016-06.io.spdk:cnode2681", 00:14:45.223 "tgt_name": "foobar", 00:14:45.223 "method": "nvmf_create_subsystem", 00:14:45.223 "req_id": 1 00:14:45.223 } 00:14:45.223 Got JSON-RPC error response 00:14:45.223 
response: 00:14:45.223 { 00:14:45.223 "code": -32603, 00:14:45.223 "message": "Unable to find target foobar" 00:14:45.223 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:45.223 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:45.223 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32248 00:14:45.785 [2024-10-28 15:10:32.397791] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32248: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:45.785 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:45.785 { 00:14:45.785 "nqn": "nqn.2016-06.io.spdk:cnode32248", 00:14:45.785 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:45.785 "method": "nvmf_create_subsystem", 00:14:45.785 "req_id": 1 00:14:45.785 } 00:14:45.785 Got JSON-RPC error response 00:14:45.785 response: 00:14:45.785 { 00:14:45.785 "code": -32602, 00:14:45.785 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:45.785 }' 00:14:45.785 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:45.785 { 00:14:45.785 "nqn": "nqn.2016-06.io.spdk:cnode32248", 00:14:45.785 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:45.785 "method": "nvmf_create_subsystem", 00:14:45.785 "req_id": 1 00:14:45.785 } 00:14:45.785 Got JSON-RPC error response 00:14:45.785 response: 00:14:45.785 { 00:14:45.785 "code": -32602, 00:14:45.785 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:45.785 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:45.785 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:45.785 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6196 00:14:46.043 [2024-10-28 15:10:32.722854] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6196: invalid model number 'SPDK_Controller' 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:46.043 { 00:14:46.043 "nqn": "nqn.2016-06.io.spdk:cnode6196", 00:14:46.043 "model_number": "SPDK_Controller\u001f", 00:14:46.043 "method": "nvmf_create_subsystem", 00:14:46.043 "req_id": 1 00:14:46.043 } 00:14:46.043 Got JSON-RPC error response 00:14:46.043 response: 00:14:46.043 { 00:14:46.043 "code": -32602, 00:14:46.043 "message": "Invalid MN SPDK_Controller\u001f" 00:14:46.043 }' 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:46.043 { 00:14:46.043 "nqn": "nqn.2016-06.io.spdk:cnode6196", 00:14:46.043 "model_number": "SPDK_Controller\u001f", 00:14:46.043 "method": "nvmf_create_subsystem", 00:14:46.043 "req_id": 1 00:14:46.043 } 00:14:46.043 Got JSON-RPC error response 00:14:46.043 response: 00:14:46.043 { 00:14:46.043 "code": -32602, 00:14:46.043 "message": "Invalid MN SPDK_Controller\u001f" 00:14:46.043 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:46.043 15:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:46.043 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:46.044 15:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:46.044 
15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '2{tCI9,hE0%BD|q]2[;$a' 00:14:46.044 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '2{tCI9,hE0%BD|q]2[;$a' nqn.2016-06.io.spdk:cnode28103 00:14:46.302 [2024-10-28 15:10:33.144243] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28103: invalid serial number '2{tCI9,hE0%BD|q]2[;$a' 00:14:46.302 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:46.302 { 00:14:46.302 "nqn": "nqn.2016-06.io.spdk:cnode28103", 00:14:46.302 "serial_number": "2{tCI9,hE0%BD|q]2[;$a", 00:14:46.302 "method": "nvmf_create_subsystem", 00:14:46.302 "req_id": 1 00:14:46.302 } 00:14:46.302 Got JSON-RPC error response 00:14:46.302 response: 00:14:46.302 { 00:14:46.302 "code": -32602, 00:14:46.302 "message": "Invalid SN 2{tCI9,hE0%BD|q]2[;$a" 00:14:46.302 }' 00:14:46.302 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:46.302 { 00:14:46.302 "nqn": "nqn.2016-06.io.spdk:cnode28103", 00:14:46.302 "serial_number": "2{tCI9,hE0%BD|q]2[;$a", 00:14:46.302 "method": "nvmf_create_subsystem", 00:14:46.302 "req_id": 1 00:14:46.302 } 00:14:46.302 Got JSON-RPC error response 00:14:46.302 response: 00:14:46.302 { 00:14:46.302 "code": -32602, 00:14:46.302 "message": "Invalid SN 2{tCI9,hE0%BD|q]2[;$a" 00:14:46.302 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:46.561 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' 
'76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:46.562 
15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:46.562 
15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:46.562 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=y 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x24' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '%^Ss4IVA-/Om)yVs=[aO$h' 00:14:46.563 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%^Ss4IVA-/Om)yVs=[aO$h' nqn.2016-06.io.spdk:cnode6476 00:14:46.822 [2024-10-28 15:10:33.637892] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6476: invalid model number '%^Ss4IVA-/Om)yVs=[aO$h' 00:14:46.822 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:46.822 { 00:14:46.822 "nqn": "nqn.2016-06.io.spdk:cnode6476", 00:14:46.822 "model_number": "%^Ss4IVA-/Om)yVs=[aO$h", 00:14:46.822 "method": "nvmf_create_subsystem", 00:14:46.822 "req_id": 1 00:14:46.822 } 00:14:46.822 Got JSON-RPC error response 00:14:46.822 response: 00:14:46.822 { 00:14:46.822 "code": -32602, 00:14:46.822 "message": "Invalid MN %^Ss4IVA-/Om)yVs=[aO$h" 00:14:46.822 }' 00:14:46.822 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:46.822 { 00:14:46.822 "nqn": "nqn.2016-06.io.spdk:cnode6476", 00:14:46.822 "model_number": "%^Ss4IVA-/Om)yVs=[aO$h", 00:14:46.822 "method": "nvmf_create_subsystem", 00:14:46.822 "req_id": 1 00:14:46.822 } 00:14:46.822 Got JSON-RPC error response 00:14:46.822 response: 00:14:46.822 { 00:14:46.822 "code": -32602, 00:14:46.822 "message": "Invalid MN %^Ss4IVA-/Om)yVs=[aO$h" 00:14:46.822 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:46.822 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:47.389 [2024-10-28 15:10:33.963050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.389 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:47.647 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:47.647 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:47.647 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:47.647 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:47.647 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:47.905 [2024-10-28 15:10:34.725604] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:47.905 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:47.905 { 00:14:47.905 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:47.905 "listen_address": { 00:14:47.905 "trtype": "tcp", 00:14:47.905 "traddr": "", 00:14:47.905 "trsvcid": "4421" 00:14:47.905 }, 00:14:47.905 "method": "nvmf_subsystem_remove_listener", 00:14:47.905 "req_id": 1 00:14:47.905 } 00:14:47.905 Got JSON-RPC error response 00:14:47.905 response: 00:14:47.905 { 00:14:47.905 "code": -32602, 00:14:47.905 "message": "Invalid parameters" 00:14:47.905 }' 00:14:47.905 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:47.905 { 00:14:47.905 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:47.905 "listen_address": { 00:14:47.905 "trtype": "tcp", 00:14:47.905 "traddr": "", 00:14:47.905 "trsvcid": "4421" 00:14:47.905 }, 00:14:47.905 "method": "nvmf_subsystem_remove_listener", 00:14:47.905 "req_id": 1 00:14:47.905 } 00:14:47.905 Got JSON-RPC error response 00:14:47.905 response: 00:14:47.905 { 00:14:47.905 "code": -32602, 00:14:47.905 "message": "Invalid parameters" 00:14:47.905 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:47.905 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25945 -i 0 00:14:48.470 [2024-10-28 15:10:35.215194] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25945: invalid cntlid range [0-65519] 00:14:48.470 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:48.470 { 00:14:48.470 "nqn": "nqn.2016-06.io.spdk:cnode25945", 00:14:48.470 "min_cntlid": 0, 00:14:48.470 "method": "nvmf_create_subsystem", 00:14:48.470 "req_id": 1 00:14:48.470 } 00:14:48.470 Got JSON-RPC error response 00:14:48.470 response: 00:14:48.470 { 00:14:48.470 "code": -32602, 00:14:48.470 "message": "Invalid cntlid range [0-65519]" 00:14:48.470 }' 00:14:48.470 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:48.470 { 00:14:48.470 "nqn": "nqn.2016-06.io.spdk:cnode25945", 00:14:48.470 "min_cntlid": 0, 00:14:48.470 "method": "nvmf_create_subsystem", 00:14:48.470 "req_id": 1 00:14:48.470 } 00:14:48.470 Got JSON-RPC error response 00:14:48.470 response: 00:14:48.470 { 00:14:48.470 "code": -32602, 00:14:48.470 "message": "Invalid cntlid range [0-65519]" 00:14:48.470 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:48.470 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16427 -i 65520 00:14:49.036 [2024-10-28 15:10:35.728921] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16427: invalid cntlid range [65520-65519] 00:14:49.036 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:49.036 { 00:14:49.036 "nqn": "nqn.2016-06.io.spdk:cnode16427", 00:14:49.036 "min_cntlid": 65520, 00:14:49.036 "method": "nvmf_create_subsystem", 00:14:49.036 "req_id": 1 00:14:49.036 } 00:14:49.036 Got JSON-RPC error 
response 00:14:49.036 response: 00:14:49.036 { 00:14:49.036 "code": -32602, 00:14:49.036 "message": "Invalid cntlid range [65520-65519]" 00:14:49.036 }' 00:14:49.036 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:49.036 { 00:14:49.036 "nqn": "nqn.2016-06.io.spdk:cnode16427", 00:14:49.036 "min_cntlid": 65520, 00:14:49.036 "method": "nvmf_create_subsystem", 00:14:49.036 "req_id": 1 00:14:49.036 } 00:14:49.036 Got JSON-RPC error response 00:14:49.036 response: 00:14:49.036 { 00:14:49.036 "code": -32602, 00:14:49.036 "message": "Invalid cntlid range [65520-65519]" 00:14:49.036 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.036 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3514 -I 0 00:14:49.601 [2024-10-28 15:10:36.407178] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3514: invalid cntlid range [1-0] 00:14:49.601 15:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:49.601 { 00:14:49.601 "nqn": "nqn.2016-06.io.spdk:cnode3514", 00:14:49.601 "max_cntlid": 0, 00:14:49.601 "method": "nvmf_create_subsystem", 00:14:49.601 "req_id": 1 00:14:49.601 } 00:14:49.601 Got JSON-RPC error response 00:14:49.601 response: 00:14:49.601 { 00:14:49.601 "code": -32602, 00:14:49.601 "message": "Invalid cntlid range [1-0]" 00:14:49.601 }' 00:14:49.601 15:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:49.601 { 00:14:49.601 "nqn": "nqn.2016-06.io.spdk:cnode3514", 00:14:49.601 "max_cntlid": 0, 00:14:49.601 "method": "nvmf_create_subsystem", 00:14:49.601 "req_id": 1 00:14:49.601 } 00:14:49.601 Got JSON-RPC error response 00:14:49.601 response: 00:14:49.601 { 00:14:49.601 "code": -32602, 00:14:49.601 "message": "Invalid cntlid range [1-0]" 00:14:49.601 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:49.601 15:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21414 -I 65520 00:14:50.169 [2024-10-28 15:10:36.736302] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21414: invalid cntlid range [1-65520] 00:14:50.169 15:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:50.169 { 00:14:50.169 "nqn": "nqn.2016-06.io.spdk:cnode21414", 00:14:50.169 "max_cntlid": 65520, 00:14:50.169 "method": "nvmf_create_subsystem", 00:14:50.169 "req_id": 1 00:14:50.169 } 00:14:50.169 Got JSON-RPC error response 00:14:50.169 response: 00:14:50.169 { 00:14:50.169 "code": -32602, 00:14:50.169 "message": "Invalid cntlid range [1-65520]" 00:14:50.169 }' 00:14:50.169 15:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:50.169 { 00:14:50.169 "nqn": "nqn.2016-06.io.spdk:cnode21414", 00:14:50.169 "max_cntlid": 65520, 00:14:50.169 "method": "nvmf_create_subsystem", 00:14:50.169 "req_id": 1 00:14:50.169 } 00:14:50.169 Got JSON-RPC error response 00:14:50.169 response: 00:14:50.169 { 00:14:50.169 "code": -32602, 00:14:50.169 "message": "Invalid cntlid range [1-65520]" 00:14:50.169 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:50.169 15:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8717 -i 6 -I 5 00:14:50.733 [2024-10-28 15:10:37.382409] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8717: invalid cntlid range [6-5] 00:14:50.733 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:50.733 { 00:14:50.733 "nqn": "nqn.2016-06.io.spdk:cnode8717", 00:14:50.733 "min_cntlid": 6, 00:14:50.733 "max_cntlid": 5, 00:14:50.734 "method": "nvmf_create_subsystem", 00:14:50.734 "req_id": 1 00:14:50.734 } 00:14:50.734 Got JSON-RPC error response 00:14:50.734 response: 00:14:50.734 { 00:14:50.734 "code": -32602, 00:14:50.734 "message": "Invalid cntlid range [6-5]" 00:14:50.734 }' 00:14:50.734 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:50.734 { 00:14:50.734 "nqn": "nqn.2016-06.io.spdk:cnode8717", 00:14:50.734 "min_cntlid": 6, 00:14:50.734 "max_cntlid": 5, 00:14:50.734 "method": "nvmf_create_subsystem", 00:14:50.734 "req_id": 1 00:14:50.734 } 00:14:50.734 Got JSON-RPC error response 00:14:50.734 response: 00:14:50.734 { 00:14:50.734 "code": -32602, 00:14:50.734 "message": "Invalid cntlid range [6-5]" 00:14:50.734 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:50.734 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:50.993 { 00:14:50.993 "name": "foobar", 00:14:50.993 "method": "nvmf_delete_target", 00:14:50.993 "req_id": 1 00:14:50.993 } 00:14:50.993 Got JSON-RPC error response 00:14:50.993 response: 00:14:50.993 { 00:14:50.993 "code": -32602, 00:14:50.993 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:50.993 }' 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:50.993 { 00:14:50.993 "name": "foobar", 00:14:50.993 "method": "nvmf_delete_target", 00:14:50.993 "req_id": 1 00:14:50.993 } 00:14:50.993 Got JSON-RPC error response 00:14:50.993 response: 00:14:50.993 { 00:14:50.993 "code": -32602, 00:14:50.993 "message": "The specified target doesn't exist, cannot delete it." 
00:14:50.993 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.993 rmmod nvme_tcp 00:14:50.993 rmmod nvme_fabrics 00:14:50.993 rmmod nvme_keyring 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.993 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3136165 ']' 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3136165 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3136165 ']' 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3136165 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3136165 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3136165' 00:14:50.994 killing process with pid 3136165 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3136165 00:14:50.994 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3136165 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.253 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:53.795 00:14:53.795 real 0m12.007s 00:14:53.795 user 0m31.930s 00:14:53.795 sys 0m3.488s 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:53.795 ************************************ 00:14:53.795 END TEST nvmf_invalid 00:14:53.795 ************************************ 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.795 ************************************ 00:14:53.795 START TEST nvmf_connect_stress 00:14:53.795 ************************************ 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:53.795 * Looking for test storage... 
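For reference, the nvmf_invalid run that just finished above reduces to a handful of negative JSON-RPC checks: it assembles an arbitrary model-number string one character at a time with printf %x / echo -e, then feeds deliberately bad arguments to scripts/rpc.py and expects each call to be rejected with JSON-RPC error -32602. A minimal sketch of those steps, assuming a target is already up on the default RPC socket and using the same rpc.py path and flags seen in the trace (-d model number, -i min_cntlid, -I max_cntlid); the loop length of 22 is just the length of the string observed above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Assemble a random printable string the same way invalid.sh does above
    string=""
    for ((ll = 0; ll < 22; ll++)); do
        hex=$(printf '%x' $((RANDOM % 94 + 33)))   # printable ASCII, 0x21-0x7e
        string+=$(echo -e "\x$hex")
    done

    # Each call below is expected to fail with code -32602, matching the responses above
    $rpc nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode6476   # invalid model number
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25945 -i 0          # invalid cntlid range [0-65519]
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16427 -i 65520      # invalid cntlid range [65520-65519]
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3514 -I 0           # invalid cntlid range [1-0]
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21414 -I 65520      # invalid cntlid range [1-65520]
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8717 -i 6 -I 5      # invalid cntlid range [6-5]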
00:14:53.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:53.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.795 --rc genhtml_branch_coverage=1 00:14:53.795 --rc genhtml_function_coverage=1 00:14:53.795 --rc genhtml_legend=1 00:14:53.795 --rc geninfo_all_blocks=1 00:14:53.795 --rc geninfo_unexecuted_blocks=1 00:14:53.795 00:14:53.795 ' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:53.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.795 --rc genhtml_branch_coverage=1 00:14:53.795 --rc genhtml_function_coverage=1 00:14:53.795 --rc genhtml_legend=1 00:14:53.795 --rc geninfo_all_blocks=1 00:14:53.795 --rc geninfo_unexecuted_blocks=1 00:14:53.795 00:14:53.795 ' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:53.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.795 --rc genhtml_branch_coverage=1 00:14:53.795 --rc genhtml_function_coverage=1 00:14:53.795 --rc genhtml_legend=1 00:14:53.795 --rc geninfo_all_blocks=1 00:14:53.795 --rc geninfo_unexecuted_blocks=1 00:14:53.795 00:14:53.795 ' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:53.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.795 --rc genhtml_branch_coverage=1 00:14:53.795 --rc genhtml_function_coverage=1 00:14:53.795 --rc genhtml_legend=1 00:14:53.795 --rc geninfo_all_blocks=1 00:14:53.795 --rc geninfo_unexecuted_blocks=1 00:14:53.795 00:14:53.795 ' 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.795 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:53.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:53.796 15:10:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:57.089 15:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:57.089 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:57.089 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:57.089 Found net devices under 0000:84:00.0: cvl_0_0 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:57.089 Found net devices under 0000:84:00.1: cvl_0_1 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:57.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:14:57.089 00:14:57.089 --- 10.0.0.2 ping statistics --- 00:14:57.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.089 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:14:57.089 00:14:57.089 --- 10.0.0.1 ping statistics --- 00:14:57.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.089 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.089 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3139100 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3139100 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3139100 ']' 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:57.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.090 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.090 [2024-10-28 15:10:43.611983] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:14:57.090 [2024-10-28 15:10:43.612088] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.090 [2024-10-28 15:10:43.757480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.090 [2024-10-28 15:10:43.880647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.090 [2024-10-28 15:10:43.880773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.090 [2024-10-28 15:10:43.880811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.090 [2024-10-28 15:10:43.880853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.090 [2024-10-28 15:10:43.880866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.090 [2024-10-28 15:10:43.883760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.090 [2024-10-28 15:10:43.883871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.090 [2024-10-28 15:10:43.883878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.348 [2024-10-28 15:10:44.085867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.348 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.349 [2024-10-28 15:10:44.103761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.349 NULL1 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3139236 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:57.349 15:10:44 
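For readers following the trace: the target-side provisioning that connect_stress.sh drives above reduces to a handful of RPCs plus launching the stress tool against the new listener. A minimal stand-alone sketch using scripts/rpc.py over the default /var/tmp/spdk.sock socket (rpc_cmd in the trace is assumed to be a thin wrapper around the same calls; flags and paths are copied verbatim from the trace, and an already-running nvmf_tgt is assumed):

# Sketch of the connect_stress target setup, assuming an nvmf_tgt already answers on /var/tmp/spdk.sock.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192                       # same transport options as the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                                 # allow any host, serial number, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                     # listener on the target-side test IP
$RPC bdev_null_create NULL1 1000 512                               # null bdev used as the namespace backing device

# Stress initiator, exactly as launched at connect_stress.sh@20 above:
"$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!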
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.349 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.915 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.915 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:57.915 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.915 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.915 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.172 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.172 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:58.172 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.172 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.172 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.428 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.428 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:58.428 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.429 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.429 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.686 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.686 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:58.686 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.686 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.686 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.943 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.943 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:58.943 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.943 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.943 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.507 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.507 15:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:59.507 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.507 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.507 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.765 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.765 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:14:59.765 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.765 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.765 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.021 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.021 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:00.021 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.021 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.021 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.278 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.278 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:00.278 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.278 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.278 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.566 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.566 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:00.566 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.566 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.566 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.850 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.850 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:00.850 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.850 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.850 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.415 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.415 15:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:01.415 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.415 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.415 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.673 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.673 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:01.673 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.673 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.673 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.931 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.931 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:01.931 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.931 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.931 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.189 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.189 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:02.189 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.189 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.189 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.446 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.447 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:02.447 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.447 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.447 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.011 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.011 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:03.011 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.011 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.011 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.269 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.269 15:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:03.269 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.269 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.269 15:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.540 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.540 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:03.540 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.540 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.540 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.798 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.798 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:03.798 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.798 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.798 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.056 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.056 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:04.056 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.056 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.056 15:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.621 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.621 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:04.621 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.621 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.621 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.879 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.879 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:04.879 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.879 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.879 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.137 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.137 15:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:05.137 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.137 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.137 15:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.395 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.395 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:05.395 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.395 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.395 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.960 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.960 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:05.960 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.960 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.960 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.218 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.218 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:06.218 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.218 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.218 15:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.476 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.476 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:06.476 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.476 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.476 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.734 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.734 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:06.734 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.734 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.734 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.992 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.992 15:10:53 
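Each repetition above is one pass of the watch loop at connect_stress.sh lines 34-35: as long as the stress process (PID 3139236) is alive, the queued RPCs in rpc.txt are replayed against the target roughly once per second. A simplified reconstruction, assuming rpc_cmd is the framework helper that feeds its stdin to scripts/rpc.py (not reproduced here):

# Simplified monitor loop; PERF_PID and rpcs mirror the values visible in the trace.
PERF_PID=3139236
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt

while kill -0 "$PERF_PID"; do      # true only while connect_stress is still running
    rpc_cmd < "$rpcs"              # replay the 20 queued RPC stanzas built by the cat loop earlier
done
# When connect_stress exits after its 10-second run (-t 10), kill -0 fails with
# "No such process" and the loop ends, which is the transition logged just below.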
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:06.992 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.992 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.992 15:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.558 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.558 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:07.558 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.558 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.558 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.558 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3139236 00:15:07.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3139236) - No such process 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3139236 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:07.816 rmmod nvme_tcp 00:15:07.816 rmmod nvme_fabrics 00:15:07.816 rmmod nvme_keyring 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:07.816 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3139100 ']' 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3139100 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3139100 ']' 00:15:07.817 15:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3139100 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3139100 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3139100' 00:15:07.817 killing process with pid 3139100 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3139100 00:15:07.817 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3139100 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.075 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.617 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:10.617 00:15:10.617 real 0m16.685s 00:15:10.617 user 0m38.954s 00:15:10.617 sys 0m7.073s 00:15:10.617 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.617 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.617 ************************************ 00:15:10.617 END TEST nvmf_connect_stress 00:15:10.617 ************************************ 00:15:10.617 15:10:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:10.617 15:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:10.617 
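For reference, the nvmftestfini teardown captured in the last entries of the connect_stress run amounts to: unload the kernel NVMe-over-TCP initiator stack, stop the target (pid 3139100), strip only the SPDK-tagged firewall rules, and remove the test namespace and addresses. A condensed sketch, run as root; the body of _remove_spdk_ns is not shown in the trace, so the ip netns delete line is an assumption of what it amounts to:

# Condensed cleanup, mirroring the commands logged above.
modprobe -v -r nvme-tcp            # also pulls out nvme_fabrics / nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill 3139100                       # stop the nvmf_tgt reactor process (killprocess in the trace also waits for it)

# Drop only the rules the test inserted (they carry an SPDK_NVMF comment):
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns for this run
ip -4 addr flush cvl_0_1           # remove the 10.0.0.1/24 initiator address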
15:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.617 15:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:10.617 ************************************ 00:15:10.617 START TEST nvmf_fused_ordering 00:15:10.617 ************************************ 00:15:10.617 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:10.617 * Looking for test storage... 00:15:10.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lcov --version 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.617 --rc genhtml_branch_coverage=1 00:15:10.617 --rc genhtml_function_coverage=1 00:15:10.617 --rc genhtml_legend=1 00:15:10.617 --rc geninfo_all_blocks=1 00:15:10.617 --rc geninfo_unexecuted_blocks=1 00:15:10.617 00:15:10.617 ' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.617 --rc genhtml_branch_coverage=1 00:15:10.617 --rc genhtml_function_coverage=1 00:15:10.617 --rc genhtml_legend=1 00:15:10.617 --rc geninfo_all_blocks=1 00:15:10.617 --rc geninfo_unexecuted_blocks=1 00:15:10.617 00:15:10.617 ' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.617 --rc genhtml_branch_coverage=1 00:15:10.617 --rc genhtml_function_coverage=1 00:15:10.617 --rc genhtml_legend=1 00:15:10.617 --rc geninfo_all_blocks=1 00:15:10.617 --rc geninfo_unexecuted_blocks=1 00:15:10.617 00:15:10.617 ' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.617 --rc genhtml_branch_coverage=1 00:15:10.617 --rc genhtml_function_coverage=1 00:15:10.617 --rc genhtml_legend=1 00:15:10.617 --rc geninfo_all_blocks=1 00:15:10.617 --rc geninfo_unexecuted_blocks=1 00:15:10.617 00:15:10.617 ' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.617 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:10.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:10.618 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:13.912 15:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:13.912 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:13.913 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:13.913 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:13.913 Found net devices under 0000:84:00.0: cvl_0_0 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:13.913 Found net devices under 0000:84:00.1: cvl_0_1 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:13.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:15:13.913 00:15:13.913 --- 10.0.0.2 ping statistics --- 00:15:13.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.913 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:15:13.913 00:15:13.913 --- 10.0.0.1 ping statistics --- 00:15:13.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.913 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3142521 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3142521 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3142521 ']' 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:13.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.913 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.913 [2024-10-28 15:11:00.472348] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:15:13.913 [2024-10-28 15:11:00.472521] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.913 [2024-10-28 15:11:00.644622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.913 [2024-10-28 15:11:00.765500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.913 [2024-10-28 15:11:00.765612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.913 [2024-10-28 15:11:00.765678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.913 [2024-10-28 15:11:00.765697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.913 [2024-10-28 15:11:00.765710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.914 [2024-10-28 15:11:00.766603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.173 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.173 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:14.173 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.173 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:14.173 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.173 [2024-10-28 15:11:01.017254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.173 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.173 [2024-10-28 15:11:01.037294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.433 NULL1 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.433 15:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:14.433 [2024-10-28 15:11:01.093453] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
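The rpc_cmd calls traced above configure a minimal NVMe-oF/TCP target (transport, subsystem, listener, null-bdev namespace) before the fused-ordering initiator is launched. A hedged sketch of roughly the same configuration done by hand with SPDK's scripts/rpc.py, assuming the target's default /var/tmp/spdk.sock RPC socket and a shell at the SPDK checkout root (the relative paths here are illustrative; the trace shows the absolute workspace paths):

  # transport and subsystem RPCs with the same options the test passes
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # listen on the address assigned to the namespaced interface earlier in the log
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MB null bdev with a 512-byte block size (the 1 GB namespace reported below)
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then run the initiator-side fused-ordering test against that listener, as the trace does
  test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Because the target was started inside the cvl_0_0_ns_spdk namespace while its UNIX RPC socket lives on the shared filesystem, the RPC calls run from the host shell and only the data path goes through the namespaced interface.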
00:15:14.433 [2024-10-28 15:11:01.093505] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3142667 ] 00:15:15.372 Attached to nqn.2016-06.io.spdk:cnode1 00:15:15.372 Namespace ID: 1 size: 1GB 00:15:15.372 fused_ordering(0) 00:15:15.372 fused_ordering(1) 00:15:15.372 fused_ordering(2) 00:15:15.372 fused_ordering(3) 00:15:15.372 fused_ordering(4) 00:15:15.372 fused_ordering(5) 00:15:15.372 fused_ordering(6) 00:15:15.372 fused_ordering(7) 00:15:15.372 fused_ordering(8) 00:15:15.372 fused_ordering(9) 00:15:15.372 fused_ordering(10) 00:15:15.372 fused_ordering(11) 00:15:15.372 fused_ordering(12) 00:15:15.372 fused_ordering(13) 00:15:15.372 fused_ordering(14) 00:15:15.372 fused_ordering(15) 00:15:15.372 fused_ordering(16) 00:15:15.372 fused_ordering(17) 00:15:15.372 fused_ordering(18) 00:15:15.372 fused_ordering(19) 00:15:15.372 fused_ordering(20) 00:15:15.372 fused_ordering(21) 00:15:15.372 fused_ordering(22) 00:15:15.372 fused_ordering(23) 00:15:15.372 fused_ordering(24) 00:15:15.372 fused_ordering(25) 00:15:15.372 fused_ordering(26) 00:15:15.372 fused_ordering(27) 00:15:15.372 fused_ordering(28) 00:15:15.372 fused_ordering(29) 00:15:15.372 fused_ordering(30) 00:15:15.372 fused_ordering(31) 00:15:15.372 fused_ordering(32) 00:15:15.372 fused_ordering(33) 00:15:15.372 fused_ordering(34) 00:15:15.372 fused_ordering(35) 00:15:15.372 fused_ordering(36) 00:15:15.372 fused_ordering(37) 00:15:15.372 fused_ordering(38) 00:15:15.372 fused_ordering(39) 00:15:15.372 fused_ordering(40) 00:15:15.372 fused_ordering(41) 00:15:15.372 fused_ordering(42) 00:15:15.372 fused_ordering(43) 00:15:15.372 fused_ordering(44) 00:15:15.372 fused_ordering(45) 00:15:15.372 fused_ordering(46) 00:15:15.372 fused_ordering(47) 00:15:15.372 fused_ordering(48) 00:15:15.372 fused_ordering(49) 00:15:15.372 fused_ordering(50) 00:15:15.372 fused_ordering(51) 00:15:15.372 fused_ordering(52) 00:15:15.372 fused_ordering(53) 00:15:15.372 fused_ordering(54) 00:15:15.372 fused_ordering(55) 00:15:15.372 fused_ordering(56) 00:15:15.372 fused_ordering(57) 00:15:15.372 fused_ordering(58) 00:15:15.372 fused_ordering(59) 00:15:15.372 fused_ordering(60) 00:15:15.372 fused_ordering(61) 00:15:15.372 fused_ordering(62) 00:15:15.372 fused_ordering(63) 00:15:15.372 fused_ordering(64) 00:15:15.372 fused_ordering(65) 00:15:15.372 fused_ordering(66) 00:15:15.372 fused_ordering(67) 00:15:15.372 fused_ordering(68) 00:15:15.372 fused_ordering(69) 00:15:15.372 fused_ordering(70) 00:15:15.372 fused_ordering(71) 00:15:15.372 fused_ordering(72) 00:15:15.372 fused_ordering(73) 00:15:15.372 fused_ordering(74) 00:15:15.372 fused_ordering(75) 00:15:15.372 fused_ordering(76) 00:15:15.372 fused_ordering(77) 00:15:15.372 fused_ordering(78) 00:15:15.372 fused_ordering(79) 00:15:15.372 fused_ordering(80) 00:15:15.372 fused_ordering(81) 00:15:15.372 fused_ordering(82) 00:15:15.372 fused_ordering(83) 00:15:15.372 fused_ordering(84) 00:15:15.372 fused_ordering(85) 00:15:15.372 fused_ordering(86) 00:15:15.372 fused_ordering(87) 00:15:15.372 fused_ordering(88) 00:15:15.372 fused_ordering(89) 00:15:15.372 fused_ordering(90) 00:15:15.372 fused_ordering(91) 00:15:15.372 fused_ordering(92) 00:15:15.372 fused_ordering(93) 00:15:15.372 fused_ordering(94) 00:15:15.372 fused_ordering(95) 00:15:15.372 fused_ordering(96) 00:15:15.372 fused_ordering(97) 00:15:15.372 fused_ordering(98) 
00:15:15.372 fused_ordering(99) 00:15:15.372 fused_ordering(100) 00:15:15.372 fused_ordering(101) 00:15:15.372 fused_ordering(102) 00:15:15.372 fused_ordering(103) 00:15:15.372 fused_ordering(104) 00:15:15.372 fused_ordering(105) 00:15:15.372 fused_ordering(106) 00:15:15.372 fused_ordering(107) 00:15:15.372 fused_ordering(108) 00:15:15.372 fused_ordering(109) 00:15:15.372 fused_ordering(110) 00:15:15.372 fused_ordering(111) 00:15:15.372 fused_ordering(112) 00:15:15.372 fused_ordering(113) 00:15:15.372 fused_ordering(114) 00:15:15.372 fused_ordering(115) 00:15:15.372 fused_ordering(116) 00:15:15.372 fused_ordering(117) 00:15:15.372 fused_ordering(118) 00:15:15.372 fused_ordering(119) 00:15:15.372 fused_ordering(120) 00:15:15.372 fused_ordering(121) 00:15:15.372 fused_ordering(122) 00:15:15.372 fused_ordering(123) 00:15:15.372 fused_ordering(124) 00:15:15.372 fused_ordering(125) 00:15:15.372 fused_ordering(126) 00:15:15.372 fused_ordering(127) 00:15:15.372 fused_ordering(128) 00:15:15.372 fused_ordering(129) 00:15:15.372 fused_ordering(130) 00:15:15.372 fused_ordering(131) 00:15:15.372 fused_ordering(132) 00:15:15.372 fused_ordering(133) 00:15:15.372 fused_ordering(134) 00:15:15.372 fused_ordering(135) 00:15:15.372 fused_ordering(136) 00:15:15.372 fused_ordering(137) 00:15:15.372 fused_ordering(138) 00:15:15.372 fused_ordering(139) 00:15:15.372 fused_ordering(140) 00:15:15.372 fused_ordering(141) 00:15:15.372 fused_ordering(142) 00:15:15.372 fused_ordering(143) 00:15:15.372 fused_ordering(144) 00:15:15.372 fused_ordering(145) 00:15:15.372 fused_ordering(146) 00:15:15.372 fused_ordering(147) 00:15:15.372 fused_ordering(148) 00:15:15.372 fused_ordering(149) 00:15:15.372 fused_ordering(150) 00:15:15.372 fused_ordering(151) 00:15:15.372 fused_ordering(152) 00:15:15.372 fused_ordering(153) 00:15:15.372 fused_ordering(154) 00:15:15.372 fused_ordering(155) 00:15:15.372 fused_ordering(156) 00:15:15.372 fused_ordering(157) 00:15:15.372 fused_ordering(158) 00:15:15.372 fused_ordering(159) 00:15:15.372 fused_ordering(160) 00:15:15.372 fused_ordering(161) 00:15:15.372 fused_ordering(162) 00:15:15.372 fused_ordering(163) 00:15:15.372 fused_ordering(164) 00:15:15.372 fused_ordering(165) 00:15:15.372 fused_ordering(166) 00:15:15.372 fused_ordering(167) 00:15:15.372 fused_ordering(168) 00:15:15.372 fused_ordering(169) 00:15:15.372 fused_ordering(170) 00:15:15.372 fused_ordering(171) 00:15:15.372 fused_ordering(172) 00:15:15.372 fused_ordering(173) 00:15:15.372 fused_ordering(174) 00:15:15.372 fused_ordering(175) 00:15:15.372 fused_ordering(176) 00:15:15.372 fused_ordering(177) 00:15:15.372 fused_ordering(178) 00:15:15.372 fused_ordering(179) 00:15:15.372 fused_ordering(180) 00:15:15.372 fused_ordering(181) 00:15:15.372 fused_ordering(182) 00:15:15.372 fused_ordering(183) 00:15:15.372 fused_ordering(184) 00:15:15.372 fused_ordering(185) 00:15:15.372 fused_ordering(186) 00:15:15.372 fused_ordering(187) 00:15:15.372 fused_ordering(188) 00:15:15.372 fused_ordering(189) 00:15:15.372 fused_ordering(190) 00:15:15.372 fused_ordering(191) 00:15:15.372 fused_ordering(192) 00:15:15.372 fused_ordering(193) 00:15:15.372 fused_ordering(194) 00:15:15.372 fused_ordering(195) 00:15:15.372 fused_ordering(196) 00:15:15.372 fused_ordering(197) 00:15:15.372 fused_ordering(198) 00:15:15.372 fused_ordering(199) 00:15:15.372 fused_ordering(200) 00:15:15.372 fused_ordering(201) 00:15:15.372 fused_ordering(202) 00:15:15.372 fused_ordering(203) 00:15:15.372 fused_ordering(204) 00:15:15.372 fused_ordering(205) 00:15:15.943 
fused_ordering(206) 00:15:15.943 fused_ordering(207) 00:15:15.943 fused_ordering(208) 00:15:15.943 fused_ordering(209) 00:15:15.943 fused_ordering(210) 00:15:15.943 fused_ordering(211) 00:15:15.943 fused_ordering(212) 00:15:15.943 fused_ordering(213) 00:15:15.943 fused_ordering(214) 00:15:15.943 fused_ordering(215) 00:15:15.943 fused_ordering(216) 00:15:15.943 fused_ordering(217) 00:15:15.943 fused_ordering(218) 00:15:15.943 fused_ordering(219) 00:15:15.943 fused_ordering(220) 00:15:15.943 fused_ordering(221) 00:15:15.943 fused_ordering(222) 00:15:15.943 fused_ordering(223) 00:15:15.943 fused_ordering(224) 00:15:15.943 fused_ordering(225) 00:15:15.943 fused_ordering(226) 00:15:15.943 fused_ordering(227) 00:15:15.943 fused_ordering(228) 00:15:15.943 fused_ordering(229) 00:15:15.943 fused_ordering(230) 00:15:15.943 fused_ordering(231) 00:15:15.943 fused_ordering(232) 00:15:15.943 fused_ordering(233) 00:15:15.943 fused_ordering(234) 00:15:15.943 fused_ordering(235) 00:15:15.943 fused_ordering(236) 00:15:15.943 fused_ordering(237) 00:15:15.943 fused_ordering(238) 00:15:15.943 fused_ordering(239) 00:15:15.943 fused_ordering(240) 00:15:15.943 fused_ordering(241) 00:15:15.943 fused_ordering(242) 00:15:15.943 fused_ordering(243) 00:15:15.943 fused_ordering(244) 00:15:15.943 fused_ordering(245) 00:15:15.943 fused_ordering(246) 00:15:15.943 fused_ordering(247) 00:15:15.943 fused_ordering(248) 00:15:15.943 fused_ordering(249) 00:15:15.943 fused_ordering(250) 00:15:15.943 fused_ordering(251) 00:15:15.943 fused_ordering(252) 00:15:15.943 fused_ordering(253) 00:15:15.943 fused_ordering(254) 00:15:15.943 fused_ordering(255) 00:15:15.943 fused_ordering(256) 00:15:15.943 fused_ordering(257) 00:15:15.943 fused_ordering(258) 00:15:15.943 fused_ordering(259) 00:15:15.943 fused_ordering(260) 00:15:15.943 fused_ordering(261) 00:15:15.943 fused_ordering(262) 00:15:15.943 fused_ordering(263) 00:15:15.943 fused_ordering(264) 00:15:15.943 fused_ordering(265) 00:15:15.943 fused_ordering(266) 00:15:15.943 fused_ordering(267) 00:15:15.943 fused_ordering(268) 00:15:15.943 fused_ordering(269) 00:15:15.943 fused_ordering(270) 00:15:15.943 fused_ordering(271) 00:15:15.943 fused_ordering(272) 00:15:15.943 fused_ordering(273) 00:15:15.943 fused_ordering(274) 00:15:15.943 fused_ordering(275) 00:15:15.943 fused_ordering(276) 00:15:15.943 fused_ordering(277) 00:15:15.943 fused_ordering(278) 00:15:15.943 fused_ordering(279) 00:15:15.943 fused_ordering(280) 00:15:15.943 fused_ordering(281) 00:15:15.943 fused_ordering(282) 00:15:15.943 fused_ordering(283) 00:15:15.943 fused_ordering(284) 00:15:15.943 fused_ordering(285) 00:15:15.943 fused_ordering(286) 00:15:15.943 fused_ordering(287) 00:15:15.943 fused_ordering(288) 00:15:15.943 fused_ordering(289) 00:15:15.943 fused_ordering(290) 00:15:15.943 fused_ordering(291) 00:15:15.943 fused_ordering(292) 00:15:15.943 fused_ordering(293) 00:15:15.943 fused_ordering(294) 00:15:15.943 fused_ordering(295) 00:15:15.943 fused_ordering(296) 00:15:15.943 fused_ordering(297) 00:15:15.943 fused_ordering(298) 00:15:15.943 fused_ordering(299) 00:15:15.943 fused_ordering(300) 00:15:15.943 fused_ordering(301) 00:15:15.943 fused_ordering(302) 00:15:15.943 fused_ordering(303) 00:15:15.943 fused_ordering(304) 00:15:15.943 fused_ordering(305) 00:15:15.943 fused_ordering(306) 00:15:15.943 fused_ordering(307) 00:15:15.943 fused_ordering(308) 00:15:15.943 fused_ordering(309) 00:15:15.943 fused_ordering(310) 00:15:15.943 fused_ordering(311) 00:15:15.943 fused_ordering(312) 00:15:15.943 fused_ordering(313) 
00:15:15.943 fused_ordering(314) 00:15:15.943 fused_ordering(315) 00:15:15.943 fused_ordering(316) 00:15:15.943 fused_ordering(317) 00:15:15.943 fused_ordering(318) 00:15:15.943 fused_ordering(319) 00:15:15.943 fused_ordering(320) 00:15:15.943 fused_ordering(321) 00:15:15.943 fused_ordering(322) 00:15:15.943 fused_ordering(323) 00:15:15.943 fused_ordering(324) 00:15:15.943 fused_ordering(325) 00:15:15.943 fused_ordering(326) 00:15:15.943 fused_ordering(327) 00:15:15.943 fused_ordering(328) 00:15:15.943 fused_ordering(329) 00:15:15.943 fused_ordering(330) 00:15:15.943 fused_ordering(331) 00:15:15.943 fused_ordering(332) 00:15:15.943 fused_ordering(333) 00:15:15.943 fused_ordering(334) 00:15:15.943 fused_ordering(335) 00:15:15.943 fused_ordering(336) 00:15:15.943 fused_ordering(337) 00:15:15.944 fused_ordering(338) 00:15:15.944 fused_ordering(339) 00:15:15.944 fused_ordering(340) 00:15:15.944 fused_ordering(341) 00:15:15.944 fused_ordering(342) 00:15:15.944 fused_ordering(343) 00:15:15.944 fused_ordering(344) 00:15:15.944 fused_ordering(345) 00:15:15.944 fused_ordering(346) 00:15:15.944 fused_ordering(347) 00:15:15.944 fused_ordering(348) 00:15:15.944 fused_ordering(349) 00:15:15.944 fused_ordering(350) 00:15:15.944 fused_ordering(351) 00:15:15.944 fused_ordering(352) 00:15:15.944 fused_ordering(353) 00:15:15.944 fused_ordering(354) 00:15:15.944 fused_ordering(355) 00:15:15.944 fused_ordering(356) 00:15:15.944 fused_ordering(357) 00:15:15.944 fused_ordering(358) 00:15:15.944 fused_ordering(359) 00:15:15.944 fused_ordering(360) 00:15:15.944 fused_ordering(361) 00:15:15.944 fused_ordering(362) 00:15:15.944 fused_ordering(363) 00:15:15.944 fused_ordering(364) 00:15:15.944 fused_ordering(365) 00:15:15.944 fused_ordering(366) 00:15:15.944 fused_ordering(367) 00:15:15.944 fused_ordering(368) 00:15:15.944 fused_ordering(369) 00:15:15.944 fused_ordering(370) 00:15:15.944 fused_ordering(371) 00:15:15.944 fused_ordering(372) 00:15:15.944 fused_ordering(373) 00:15:15.944 fused_ordering(374) 00:15:15.944 fused_ordering(375) 00:15:15.944 fused_ordering(376) 00:15:15.944 fused_ordering(377) 00:15:15.944 fused_ordering(378) 00:15:15.944 fused_ordering(379) 00:15:15.944 fused_ordering(380) 00:15:15.944 fused_ordering(381) 00:15:15.944 fused_ordering(382) 00:15:15.944 fused_ordering(383) 00:15:15.944 fused_ordering(384) 00:15:15.944 fused_ordering(385) 00:15:15.944 fused_ordering(386) 00:15:15.944 fused_ordering(387) 00:15:15.944 fused_ordering(388) 00:15:15.944 fused_ordering(389) 00:15:15.944 fused_ordering(390) 00:15:15.944 fused_ordering(391) 00:15:15.944 fused_ordering(392) 00:15:15.944 fused_ordering(393) 00:15:15.944 fused_ordering(394) 00:15:15.944 fused_ordering(395) 00:15:15.944 fused_ordering(396) 00:15:15.944 fused_ordering(397) 00:15:15.944 fused_ordering(398) 00:15:15.944 fused_ordering(399) 00:15:15.944 fused_ordering(400) 00:15:15.944 fused_ordering(401) 00:15:15.944 fused_ordering(402) 00:15:15.944 fused_ordering(403) 00:15:15.944 fused_ordering(404) 00:15:15.944 fused_ordering(405) 00:15:15.944 fused_ordering(406) 00:15:15.944 fused_ordering(407) 00:15:15.944 fused_ordering(408) 00:15:15.944 fused_ordering(409) 00:15:15.944 fused_ordering(410) 00:15:16.885 fused_ordering(411) 00:15:16.885 fused_ordering(412) 00:15:16.885 fused_ordering(413) 00:15:16.885 fused_ordering(414) 00:15:16.885 fused_ordering(415) 00:15:16.885 fused_ordering(416) 00:15:16.885 fused_ordering(417) 00:15:16.885 fused_ordering(418) 00:15:16.885 fused_ordering(419) 00:15:16.885 fused_ordering(420) 00:15:16.885 
fused_ordering(421) 00:15:16.885 fused_ordering(422) 00:15:16.885 fused_ordering(423) 00:15:16.885 fused_ordering(424) 00:15:16.885 fused_ordering(425) 00:15:16.885 fused_ordering(426) 00:15:16.885 fused_ordering(427) 00:15:16.885 fused_ordering(428) 00:15:16.885 fused_ordering(429) 00:15:16.885 fused_ordering(430) 00:15:16.885 fused_ordering(431) 00:15:16.885 fused_ordering(432) 00:15:16.885 fused_ordering(433) 00:15:16.885 fused_ordering(434) 00:15:16.885 fused_ordering(435) 00:15:16.885 fused_ordering(436) 00:15:16.885 fused_ordering(437) 00:15:16.885 fused_ordering(438) 00:15:16.885 fused_ordering(439) 00:15:16.885 fused_ordering(440) 00:15:16.885 fused_ordering(441) 00:15:16.885 fused_ordering(442) 00:15:16.885 fused_ordering(443) 00:15:16.885 fused_ordering(444) 00:15:16.885 fused_ordering(445) 00:15:16.885 fused_ordering(446) 00:15:16.885 fused_ordering(447) 00:15:16.885 fused_ordering(448) 00:15:16.885 fused_ordering(449) 00:15:16.885 fused_ordering(450) 00:15:16.885 fused_ordering(451) 00:15:16.885 fused_ordering(452) 00:15:16.885 fused_ordering(453) 00:15:16.885 fused_ordering(454) 00:15:16.885 fused_ordering(455) 00:15:16.885 fused_ordering(456) 00:15:16.885 fused_ordering(457) 00:15:16.885 fused_ordering(458) 00:15:16.885 fused_ordering(459) 00:15:16.885 fused_ordering(460) 00:15:16.885 fused_ordering(461) 00:15:16.885 fused_ordering(462) 00:15:16.885 fused_ordering(463) 00:15:16.885 fused_ordering(464) 00:15:16.885 fused_ordering(465) 00:15:16.885 fused_ordering(466) 00:15:16.885 fused_ordering(467) 00:15:16.885 fused_ordering(468) 00:15:16.885 fused_ordering(469) 00:15:16.885 fused_ordering(470) 00:15:16.885 fused_ordering(471) 00:15:16.885 fused_ordering(472) 00:15:16.885 fused_ordering(473) 00:15:16.885 fused_ordering(474) 00:15:16.885 fused_ordering(475) 00:15:16.885 fused_ordering(476) 00:15:16.885 fused_ordering(477) 00:15:16.885 fused_ordering(478) 00:15:16.885 fused_ordering(479) 00:15:16.885 fused_ordering(480) 00:15:16.885 fused_ordering(481) 00:15:16.885 fused_ordering(482) 00:15:16.885 fused_ordering(483) 00:15:16.885 fused_ordering(484) 00:15:16.885 fused_ordering(485) 00:15:16.885 fused_ordering(486) 00:15:16.885 fused_ordering(487) 00:15:16.885 fused_ordering(488) 00:15:16.885 fused_ordering(489) 00:15:16.885 fused_ordering(490) 00:15:16.885 fused_ordering(491) 00:15:16.885 fused_ordering(492) 00:15:16.885 fused_ordering(493) 00:15:16.885 fused_ordering(494) 00:15:16.885 fused_ordering(495) 00:15:16.885 fused_ordering(496) 00:15:16.885 fused_ordering(497) 00:15:16.885 fused_ordering(498) 00:15:16.885 fused_ordering(499) 00:15:16.885 fused_ordering(500) 00:15:16.885 fused_ordering(501) 00:15:16.885 fused_ordering(502) 00:15:16.885 fused_ordering(503) 00:15:16.885 fused_ordering(504) 00:15:16.885 fused_ordering(505) 00:15:16.885 fused_ordering(506) 00:15:16.885 fused_ordering(507) 00:15:16.885 fused_ordering(508) 00:15:16.885 fused_ordering(509) 00:15:16.885 fused_ordering(510) 00:15:16.885 fused_ordering(511) 00:15:16.885 fused_ordering(512) 00:15:16.885 fused_ordering(513) 00:15:16.885 fused_ordering(514) 00:15:16.885 fused_ordering(515) 00:15:16.885 fused_ordering(516) 00:15:16.885 fused_ordering(517) 00:15:16.885 fused_ordering(518) 00:15:16.885 fused_ordering(519) 00:15:16.885 fused_ordering(520) 00:15:16.885 fused_ordering(521) 00:15:16.885 fused_ordering(522) 00:15:16.885 fused_ordering(523) 00:15:16.885 fused_ordering(524) 00:15:16.885 fused_ordering(525) 00:15:16.885 fused_ordering(526) 00:15:16.885 fused_ordering(527) 00:15:16.885 fused_ordering(528) 
00:15:16.885 fused_ordering(529) 00:15:16.885 fused_ordering(530) 00:15:16.885 fused_ordering(531) 00:15:16.885 fused_ordering(532) 00:15:16.885 fused_ordering(533) 00:15:16.885 fused_ordering(534) 00:15:16.885 fused_ordering(535) 00:15:16.885 fused_ordering(536) 00:15:16.885 fused_ordering(537) 00:15:16.885 fused_ordering(538) 00:15:16.885 fused_ordering(539) 00:15:16.885 fused_ordering(540) 00:15:16.885 fused_ordering(541) 00:15:16.885 fused_ordering(542) 00:15:16.885 fused_ordering(543) 00:15:16.885 fused_ordering(544) 00:15:16.885 fused_ordering(545) 00:15:16.885 fused_ordering(546) 00:15:16.885 fused_ordering(547) 00:15:16.885 fused_ordering(548) 00:15:16.885 fused_ordering(549) 00:15:16.885 fused_ordering(550) 00:15:16.885 fused_ordering(551) 00:15:16.885 fused_ordering(552) 00:15:16.885 fused_ordering(553) 00:15:16.885 fused_ordering(554) 00:15:16.885 fused_ordering(555) 00:15:16.885 fused_ordering(556) 00:15:16.885 fused_ordering(557) 00:15:16.885 fused_ordering(558) 00:15:16.885 fused_ordering(559) 00:15:16.885 fused_ordering(560) 00:15:16.885 fused_ordering(561) 00:15:16.885 fused_ordering(562) 00:15:16.885 fused_ordering(563) 00:15:16.885 fused_ordering(564) 00:15:16.885 fused_ordering(565) 00:15:16.885 fused_ordering(566) 00:15:16.885 fused_ordering(567) 00:15:16.885 fused_ordering(568) 00:15:16.885 fused_ordering(569) 00:15:16.885 fused_ordering(570) 00:15:16.885 fused_ordering(571) 00:15:16.885 fused_ordering(572) 00:15:16.885 fused_ordering(573) 00:15:16.885 fused_ordering(574) 00:15:16.885 fused_ordering(575) 00:15:16.885 fused_ordering(576) 00:15:16.885 fused_ordering(577) 00:15:16.885 fused_ordering(578) 00:15:16.885 fused_ordering(579) 00:15:16.885 fused_ordering(580) 00:15:16.885 fused_ordering(581) 00:15:16.885 fused_ordering(582) 00:15:16.885 fused_ordering(583) 00:15:16.885 fused_ordering(584) 00:15:16.885 fused_ordering(585) 00:15:16.885 fused_ordering(586) 00:15:16.885 fused_ordering(587) 00:15:16.885 fused_ordering(588) 00:15:16.885 fused_ordering(589) 00:15:16.885 fused_ordering(590) 00:15:16.885 fused_ordering(591) 00:15:16.885 fused_ordering(592) 00:15:16.885 fused_ordering(593) 00:15:16.885 fused_ordering(594) 00:15:16.885 fused_ordering(595) 00:15:16.885 fused_ordering(596) 00:15:16.885 fused_ordering(597) 00:15:16.885 fused_ordering(598) 00:15:16.885 fused_ordering(599) 00:15:16.885 fused_ordering(600) 00:15:16.885 fused_ordering(601) 00:15:16.885 fused_ordering(602) 00:15:16.885 fused_ordering(603) 00:15:16.885 fused_ordering(604) 00:15:16.885 fused_ordering(605) 00:15:16.885 fused_ordering(606) 00:15:16.885 fused_ordering(607) 00:15:16.885 fused_ordering(608) 00:15:16.885 fused_ordering(609) 00:15:16.885 fused_ordering(610) 00:15:16.885 fused_ordering(611) 00:15:16.885 fused_ordering(612) 00:15:16.885 fused_ordering(613) 00:15:16.885 fused_ordering(614) 00:15:16.885 fused_ordering(615) 00:15:18.269 fused_ordering(616) 00:15:18.269 fused_ordering(617) 00:15:18.269 fused_ordering(618) 00:15:18.269 fused_ordering(619) 00:15:18.269 fused_ordering(620) 00:15:18.269 fused_ordering(621) 00:15:18.269 fused_ordering(622) 00:15:18.269 fused_ordering(623) 00:15:18.269 fused_ordering(624) 00:15:18.269 fused_ordering(625) 00:15:18.269 fused_ordering(626) 00:15:18.269 fused_ordering(627) 00:15:18.269 fused_ordering(628) 00:15:18.269 fused_ordering(629) 00:15:18.269 fused_ordering(630) 00:15:18.269 fused_ordering(631) 00:15:18.269 fused_ordering(632) 00:15:18.269 fused_ordering(633) 00:15:18.269 fused_ordering(634) 00:15:18.269 fused_ordering(635) 00:15:18.269 
fused_ordering(636) 00:15:18.269 fused_ordering(637) 00:15:18.269 fused_ordering(638) 00:15:18.269 fused_ordering(639) 00:15:18.269 fused_ordering(640) 00:15:18.269 fused_ordering(641) 00:15:18.269 fused_ordering(642) 00:15:18.269 fused_ordering(643) 00:15:18.269 fused_ordering(644) 00:15:18.269 fused_ordering(645) 00:15:18.269 fused_ordering(646) 00:15:18.269 fused_ordering(647) 00:15:18.269 fused_ordering(648) 00:15:18.269 fused_ordering(649) 00:15:18.269 fused_ordering(650) 00:15:18.269 fused_ordering(651) 00:15:18.269 fused_ordering(652) 00:15:18.269 fused_ordering(653) 00:15:18.269 fused_ordering(654) 00:15:18.269 fused_ordering(655) 00:15:18.269 fused_ordering(656) 00:15:18.269 fused_ordering(657) 00:15:18.269 fused_ordering(658) 00:15:18.269 fused_ordering(659) 00:15:18.269 fused_ordering(660) 00:15:18.269 fused_ordering(661) 00:15:18.269 fused_ordering(662) 00:15:18.269 fused_ordering(663) 00:15:18.269 fused_ordering(664) 00:15:18.269 fused_ordering(665) 00:15:18.269 fused_ordering(666) 00:15:18.269 fused_ordering(667) 00:15:18.269 fused_ordering(668) 00:15:18.269 fused_ordering(669) 00:15:18.269 fused_ordering(670) 00:15:18.269 fused_ordering(671) 00:15:18.269 fused_ordering(672) 00:15:18.269 fused_ordering(673) 00:15:18.269 fused_ordering(674) 00:15:18.269 fused_ordering(675) 00:15:18.269 fused_ordering(676) 00:15:18.269 fused_ordering(677) 00:15:18.269 fused_ordering(678) 00:15:18.269 fused_ordering(679) 00:15:18.269 fused_ordering(680) 00:15:18.269 fused_ordering(681) 00:15:18.269 fused_ordering(682) 00:15:18.269 fused_ordering(683) 00:15:18.269 fused_ordering(684) 00:15:18.269 fused_ordering(685) 00:15:18.269 fused_ordering(686) 00:15:18.269 fused_ordering(687) 00:15:18.269 fused_ordering(688) 00:15:18.269 fused_ordering(689) 00:15:18.269 fused_ordering(690) 00:15:18.269 fused_ordering(691) 00:15:18.269 fused_ordering(692) 00:15:18.269 fused_ordering(693) 00:15:18.269 fused_ordering(694) 00:15:18.269 fused_ordering(695) 00:15:18.269 fused_ordering(696) 00:15:18.269 fused_ordering(697) 00:15:18.269 fused_ordering(698) 00:15:18.269 fused_ordering(699) 00:15:18.269 fused_ordering(700) 00:15:18.269 fused_ordering(701) 00:15:18.269 fused_ordering(702) 00:15:18.269 fused_ordering(703) 00:15:18.269 fused_ordering(704) 00:15:18.269 fused_ordering(705) 00:15:18.269 fused_ordering(706) 00:15:18.269 fused_ordering(707) 00:15:18.269 fused_ordering(708) 00:15:18.269 fused_ordering(709) 00:15:18.269 fused_ordering(710) 00:15:18.269 fused_ordering(711) 00:15:18.269 fused_ordering(712) 00:15:18.269 fused_ordering(713) 00:15:18.269 fused_ordering(714) 00:15:18.269 fused_ordering(715) 00:15:18.269 fused_ordering(716) 00:15:18.269 fused_ordering(717) 00:15:18.269 fused_ordering(718) 00:15:18.269 fused_ordering(719) 00:15:18.269 fused_ordering(720) 00:15:18.269 fused_ordering(721) 00:15:18.269 fused_ordering(722) 00:15:18.269 fused_ordering(723) 00:15:18.269 fused_ordering(724) 00:15:18.269 fused_ordering(725) 00:15:18.269 fused_ordering(726) 00:15:18.269 fused_ordering(727) 00:15:18.269 fused_ordering(728) 00:15:18.269 fused_ordering(729) 00:15:18.269 fused_ordering(730) 00:15:18.269 fused_ordering(731) 00:15:18.269 fused_ordering(732) 00:15:18.269 fused_ordering(733) 00:15:18.269 fused_ordering(734) 00:15:18.269 fused_ordering(735) 00:15:18.269 fused_ordering(736) 00:15:18.269 fused_ordering(737) 00:15:18.269 fused_ordering(738) 00:15:18.269 fused_ordering(739) 00:15:18.269 fused_ordering(740) 00:15:18.269 fused_ordering(741) 00:15:18.269 fused_ordering(742) 00:15:18.269 fused_ordering(743) 
00:15:18.269 fused_ordering(744) 00:15:18.269 fused_ordering(745) 00:15:18.269 fused_ordering(746) 00:15:18.269 fused_ordering(747) 00:15:18.269 fused_ordering(748) 00:15:18.269 fused_ordering(749) 00:15:18.269 fused_ordering(750) 00:15:18.269 fused_ordering(751) 00:15:18.269 fused_ordering(752) 00:15:18.269 fused_ordering(753) 00:15:18.269 fused_ordering(754) 00:15:18.269 fused_ordering(755) 00:15:18.269 fused_ordering(756) 00:15:18.269 fused_ordering(757) 00:15:18.269 fused_ordering(758) 00:15:18.269 fused_ordering(759) 00:15:18.269 fused_ordering(760) 00:15:18.269 fused_ordering(761) 00:15:18.269 fused_ordering(762) 00:15:18.269 fused_ordering(763) 00:15:18.269 fused_ordering(764) 00:15:18.269 fused_ordering(765) 00:15:18.269 fused_ordering(766) 00:15:18.269 fused_ordering(767) 00:15:18.269 fused_ordering(768) 00:15:18.269 fused_ordering(769) 00:15:18.269 fused_ordering(770) 00:15:18.269 fused_ordering(771) 00:15:18.269 fused_ordering(772) 00:15:18.269 fused_ordering(773) 00:15:18.270 fused_ordering(774) 00:15:18.270 fused_ordering(775) 00:15:18.270 fused_ordering(776) 00:15:18.270 fused_ordering(777) 00:15:18.270 fused_ordering(778) 00:15:18.270 fused_ordering(779) 00:15:18.270 fused_ordering(780) 00:15:18.270 fused_ordering(781) 00:15:18.270 fused_ordering(782) 00:15:18.270 fused_ordering(783) 00:15:18.270 fused_ordering(784) 00:15:18.270 fused_ordering(785) 00:15:18.270 fused_ordering(786) 00:15:18.270 fused_ordering(787) 00:15:18.270 fused_ordering(788) 00:15:18.270 fused_ordering(789) 00:15:18.270 fused_ordering(790) 00:15:18.270 fused_ordering(791) 00:15:18.270 fused_ordering(792) 00:15:18.270 fused_ordering(793) 00:15:18.270 fused_ordering(794) 00:15:18.270 fused_ordering(795) 00:15:18.270 fused_ordering(796) 00:15:18.270 fused_ordering(797) 00:15:18.270 fused_ordering(798) 00:15:18.270 fused_ordering(799) 00:15:18.270 fused_ordering(800) 00:15:18.270 fused_ordering(801) 00:15:18.270 fused_ordering(802) 00:15:18.270 fused_ordering(803) 00:15:18.270 fused_ordering(804) 00:15:18.270 fused_ordering(805) 00:15:18.270 fused_ordering(806) 00:15:18.270 fused_ordering(807) 00:15:18.270 fused_ordering(808) 00:15:18.270 fused_ordering(809) 00:15:18.270 fused_ordering(810) 00:15:18.270 fused_ordering(811) 00:15:18.270 fused_ordering(812) 00:15:18.270 fused_ordering(813) 00:15:18.270 fused_ordering(814) 00:15:18.270 fused_ordering(815) 00:15:18.270 fused_ordering(816) 00:15:18.270 fused_ordering(817) 00:15:18.270 fused_ordering(818) 00:15:18.270 fused_ordering(819) 00:15:18.270 fused_ordering(820) 00:15:19.653 fused_ordering(821) 00:15:19.653 fused_ordering(822) 00:15:19.653 fused_ordering(823) 00:15:19.653 fused_ordering(824) 00:15:19.653 fused_ordering(825) 00:15:19.653 fused_ordering(826) 00:15:19.653 fused_ordering(827) 00:15:19.653 fused_ordering(828) 00:15:19.653 fused_ordering(829) 00:15:19.653 fused_ordering(830) 00:15:19.653 fused_ordering(831) 00:15:19.653 fused_ordering(832) 00:15:19.653 fused_ordering(833) 00:15:19.653 fused_ordering(834) 00:15:19.653 fused_ordering(835) 00:15:19.653 fused_ordering(836) 00:15:19.653 fused_ordering(837) 00:15:19.653 fused_ordering(838) 00:15:19.653 fused_ordering(839) 00:15:19.653 fused_ordering(840) 00:15:19.653 fused_ordering(841) 00:15:19.653 fused_ordering(842) 00:15:19.653 fused_ordering(843) 00:15:19.653 fused_ordering(844) 00:15:19.653 fused_ordering(845) 00:15:19.653 fused_ordering(846) 00:15:19.653 fused_ordering(847) 00:15:19.653 fused_ordering(848) 00:15:19.653 fused_ordering(849) 00:15:19.653 fused_ordering(850) 00:15:19.653 
fused_ordering(851) 00:15:19.653 fused_ordering(852) 00:15:19.653 fused_ordering(853) 00:15:19.653 fused_ordering(854) 00:15:19.653 fused_ordering(855) 00:15:19.653 fused_ordering(856) 00:15:19.653 fused_ordering(857) 00:15:19.653 fused_ordering(858) 00:15:19.653 fused_ordering(859) 00:15:19.653 fused_ordering(860) 00:15:19.653 fused_ordering(861) 00:15:19.653 fused_ordering(862) 00:15:19.653 fused_ordering(863) 00:15:19.653 fused_ordering(864) 00:15:19.653 fused_ordering(865) 00:15:19.653 fused_ordering(866) 00:15:19.653 fused_ordering(867) 00:15:19.653 fused_ordering(868) 00:15:19.653 fused_ordering(869) 00:15:19.653 fused_ordering(870) 00:15:19.653 fused_ordering(871) 00:15:19.653 fused_ordering(872) 00:15:19.653 fused_ordering(873) 00:15:19.653 fused_ordering(874) 00:15:19.653 fused_ordering(875) 00:15:19.653 fused_ordering(876) 00:15:19.653 fused_ordering(877) 00:15:19.653 fused_ordering(878) 00:15:19.653 fused_ordering(879) 00:15:19.653 fused_ordering(880) 00:15:19.653 fused_ordering(881) 00:15:19.653 fused_ordering(882) 00:15:19.653 fused_ordering(883) 00:15:19.653 fused_ordering(884) 00:15:19.653 fused_ordering(885) 00:15:19.653 fused_ordering(886) 00:15:19.653 fused_ordering(887) 00:15:19.653 fused_ordering(888) 00:15:19.653 fused_ordering(889) 00:15:19.653 fused_ordering(890) 00:15:19.653 fused_ordering(891) 00:15:19.653 fused_ordering(892) 00:15:19.653 fused_ordering(893) 00:15:19.653 fused_ordering(894) 00:15:19.653 fused_ordering(895) 00:15:19.653 fused_ordering(896) 00:15:19.653 fused_ordering(897) 00:15:19.653 fused_ordering(898) 00:15:19.653 fused_ordering(899) 00:15:19.653 fused_ordering(900) 00:15:19.653 fused_ordering(901) 00:15:19.653 fused_ordering(902) 00:15:19.653 fused_ordering(903) 00:15:19.653 fused_ordering(904) 00:15:19.653 fused_ordering(905) 00:15:19.653 fused_ordering(906) 00:15:19.653 fused_ordering(907) 00:15:19.653 fused_ordering(908) 00:15:19.653 fused_ordering(909) 00:15:19.653 fused_ordering(910) 00:15:19.653 fused_ordering(911) 00:15:19.653 fused_ordering(912) 00:15:19.653 fused_ordering(913) 00:15:19.653 fused_ordering(914) 00:15:19.653 fused_ordering(915) 00:15:19.653 fused_ordering(916) 00:15:19.653 fused_ordering(917) 00:15:19.653 fused_ordering(918) 00:15:19.653 fused_ordering(919) 00:15:19.653 fused_ordering(920) 00:15:19.653 fused_ordering(921) 00:15:19.653 fused_ordering(922) 00:15:19.653 fused_ordering(923) 00:15:19.653 fused_ordering(924) 00:15:19.653 fused_ordering(925) 00:15:19.653 fused_ordering(926) 00:15:19.653 fused_ordering(927) 00:15:19.653 fused_ordering(928) 00:15:19.653 fused_ordering(929) 00:15:19.653 fused_ordering(930) 00:15:19.653 fused_ordering(931) 00:15:19.653 fused_ordering(932) 00:15:19.653 fused_ordering(933) 00:15:19.653 fused_ordering(934) 00:15:19.653 fused_ordering(935) 00:15:19.653 fused_ordering(936) 00:15:19.653 fused_ordering(937) 00:15:19.653 fused_ordering(938) 00:15:19.653 fused_ordering(939) 00:15:19.653 fused_ordering(940) 00:15:19.653 fused_ordering(941) 00:15:19.653 fused_ordering(942) 00:15:19.653 fused_ordering(943) 00:15:19.653 fused_ordering(944) 00:15:19.653 fused_ordering(945) 00:15:19.653 fused_ordering(946) 00:15:19.653 fused_ordering(947) 00:15:19.653 fused_ordering(948) 00:15:19.653 fused_ordering(949) 00:15:19.653 fused_ordering(950) 00:15:19.654 fused_ordering(951) 00:15:19.654 fused_ordering(952) 00:15:19.654 fused_ordering(953) 00:15:19.654 fused_ordering(954) 00:15:19.654 fused_ordering(955) 00:15:19.654 fused_ordering(956) 00:15:19.654 fused_ordering(957) 00:15:19.654 fused_ordering(958) 
00:15:19.654 fused_ordering(959) 00:15:19.654 fused_ordering(960) 00:15:19.654 fused_ordering(961) 00:15:19.654 fused_ordering(962) 00:15:19.654 fused_ordering(963) 00:15:19.654 fused_ordering(964) 00:15:19.654 fused_ordering(965) 00:15:19.654 fused_ordering(966) 00:15:19.654 fused_ordering(967) 00:15:19.654 fused_ordering(968) 00:15:19.654 fused_ordering(969) 00:15:19.654 fused_ordering(970) 00:15:19.654 fused_ordering(971) 00:15:19.654 fused_ordering(972) 00:15:19.654 fused_ordering(973) 00:15:19.654 fused_ordering(974) 00:15:19.654 fused_ordering(975) 00:15:19.654 fused_ordering(976) 00:15:19.654 fused_ordering(977) 00:15:19.654 fused_ordering(978) 00:15:19.654 fused_ordering(979) 00:15:19.654 fused_ordering(980) 00:15:19.654 fused_ordering(981) 00:15:19.654 fused_ordering(982) 00:15:19.654 fused_ordering(983) 00:15:19.654 fused_ordering(984) 00:15:19.654 fused_ordering(985) 00:15:19.654 fused_ordering(986) 00:15:19.654 fused_ordering(987) 00:15:19.654 fused_ordering(988) 00:15:19.654 fused_ordering(989) 00:15:19.654 fused_ordering(990) 00:15:19.654 fused_ordering(991) 00:15:19.654 fused_ordering(992) 00:15:19.654 fused_ordering(993) 00:15:19.654 fused_ordering(994) 00:15:19.654 fused_ordering(995) 00:15:19.654 fused_ordering(996) 00:15:19.654 fused_ordering(997) 00:15:19.654 fused_ordering(998) 00:15:19.654 fused_ordering(999) 00:15:19.654 fused_ordering(1000) 00:15:19.654 fused_ordering(1001) 00:15:19.654 fused_ordering(1002) 00:15:19.654 fused_ordering(1003) 00:15:19.654 fused_ordering(1004) 00:15:19.654 fused_ordering(1005) 00:15:19.654 fused_ordering(1006) 00:15:19.654 fused_ordering(1007) 00:15:19.654 fused_ordering(1008) 00:15:19.654 fused_ordering(1009) 00:15:19.654 fused_ordering(1010) 00:15:19.654 fused_ordering(1011) 00:15:19.654 fused_ordering(1012) 00:15:19.654 fused_ordering(1013) 00:15:19.654 fused_ordering(1014) 00:15:19.654 fused_ordering(1015) 00:15:19.654 fused_ordering(1016) 00:15:19.654 fused_ordering(1017) 00:15:19.654 fused_ordering(1018) 00:15:19.654 fused_ordering(1019) 00:15:19.654 fused_ordering(1020) 00:15:19.654 fused_ordering(1021) 00:15:19.654 fused_ordering(1022) 00:15:19.654 fused_ordering(1023) 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:19.913 rmmod nvme_tcp 00:15:19.913 rmmod nvme_fabrics 00:15:19.913 rmmod nvme_keyring 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:19.913 15:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3142521 ']' 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3142521 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3142521 ']' 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3142521 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3142521 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3142521' 00:15:19.913 killing process with pid 3142521 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3142521 00:15:19.913 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3142521 00:15:20.484 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:20.484 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.485 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:22.394 00:15:22.394 real 0m12.106s 00:15:22.394 user 0m10.110s 00:15:22.394 sys 0m6.046s 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:22.394 ************************************ 00:15:22.394 END TEST nvmf_fused_ordering 00:15:22.394 
************************************ 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:22.394 ************************************ 00:15:22.394 START TEST nvmf_ns_masking 00:15:22.394 ************************************ 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:22.394 * Looking for test storage... 00:15:22.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lcov --version 00:15:22.394 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:22.653 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:22.653 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.653 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.653 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.653 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.653 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:22.653 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:22.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.654 --rc genhtml_branch_coverage=1 00:15:22.654 --rc genhtml_function_coverage=1 00:15:22.654 --rc genhtml_legend=1 00:15:22.654 --rc geninfo_all_blocks=1 00:15:22.654 --rc geninfo_unexecuted_blocks=1 00:15:22.654 00:15:22.654 ' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:22.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.654 --rc genhtml_branch_coverage=1 00:15:22.654 --rc genhtml_function_coverage=1 00:15:22.654 --rc genhtml_legend=1 00:15:22.654 --rc geninfo_all_blocks=1 00:15:22.654 --rc geninfo_unexecuted_blocks=1 00:15:22.654 00:15:22.654 ' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:22.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.654 --rc genhtml_branch_coverage=1 00:15:22.654 --rc genhtml_function_coverage=1 00:15:22.654 --rc genhtml_legend=1 00:15:22.654 --rc geninfo_all_blocks=1 00:15:22.654 --rc geninfo_unexecuted_blocks=1 00:15:22.654 00:15:22.654 ' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:22.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.654 --rc genhtml_branch_coverage=1 00:15:22.654 --rc genhtml_function_coverage=1 00:15:22.654 --rc genhtml_legend=1 00:15:22.654 --rc geninfo_all_blocks=1 00:15:22.654 --rc geninfo_unexecuted_blocks=1 00:15:22.654 00:15:22.654 ' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:22.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8ece3d58-6d11-45f9-aa5b-d4a87653d3fa 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=67467a77-ce4a-4cb6-a42d-7a93e0aef642 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=26c24722-a476-4b78-afcc-9d8cbf646f7a 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.654 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:22.655 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:25.950 15:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:25.950 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:25.951 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:25.951 15:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:25.951 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:25.951 Found net devices under 0000:84:00.0: cvl_0_0 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
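The discovery trace above matches each supported NVMe-oF-capable PCI function (here the two Intel E810 ports, device ID 0x159b) and then resolves its kernel network interface by globbing sysfs, much as the pci_net_devs assignment in nvmf/common.sh does. A minimal stand-alone sketch of that lookup — the PCI address is taken from the log, the loop and variable names are illustrative only — assuming the port is bound to a kernel net driver:

  # Map a PCI function to the netdev(s) the kernel created for it.
  pci=0000:84:00.0
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$path" ] || continue       # no netdev, e.g. the port is bound to vfio-pci/uio
      echo "PCI $pci -> ${path##*/}"   # prints e.g. "PCI 0000:84:00.0 -> cvl_0_0"
  done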
00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:25.951 Found net devices under 0000:84:00.1: cvl_0_1 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.951 15:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:25.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:15:25.951 00:15:25.951 --- 10.0.0.2 ping statistics --- 00:15:25.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.951 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:15:25.951 00:15:25.951 --- 10.0.0.1 ping statistics --- 00:15:25.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.951 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3145286 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3145286 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3145286 ']' 00:15:25.951 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.952 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.952 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.952 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.952 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:25.952 [2024-10-28 15:11:12.471094] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:15:25.952 [2024-10-28 15:11:12.471199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.952 [2024-10-28 15:11:12.603019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.952 [2024-10-28 15:11:12.704976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.952 [2024-10-28 15:11:12.705067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.952 [2024-10-28 15:11:12.705105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.952 [2024-10-28 15:11:12.705136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.952 [2024-10-28 15:11:12.705173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
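Condensed from the nvmf_tcp_init trace above: the harness builds its point-to-point NVMe/TCP topology by moving one E810 port into a private network namespace, addressing both ends from 10.0.0.0/24, opening TCP port 4420, and launching nvmf_tgt inside that namespace. A rough manual equivalent of the same plumbing — interface names, addresses, and the target binary are taken from the log, with the workspace prefix shortened; the exact flush/comment options the script adds are omitted; run as root:

  ip netns add cvl_0_0_ns_spdk                            # namespace that will own the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                      # sanity-check target-side reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # ...and initiator-side
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &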
00:15:25.952 [2024-10-28 15:11:12.705839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.211 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.211 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:26.211 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:26.211 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.211 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:26.211 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.211 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:26.471 [2024-10-28 15:11:13.245889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.471 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:26.471 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:26.471 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:27.040 Malloc1 00:15:27.040 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:27.610 Malloc2 00:15:27.610 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:28.180 15:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:29.122 15:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.383 [2024-10-28 15:11:16.216741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.644 15:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:29.644 15:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 26c24722-a476-4b78-afcc-9d8cbf646f7a -a 10.0.0.2 -s 4420 -i 4 00:15:29.644 15:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.644 15:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:29.644 15:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.644 15:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:29.644 
15:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:32.186 [ 0]:0x1 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=abd664514a63435a86d8fb2394d4d659 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ abd664514a63435a86d8fb2394d4d659 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:32.186 [ 0]:0x1 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:32.186 15:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.186 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=abd664514a63435a86d8fb2394d4d659 00:15:32.186 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ abd664514a63435a86d8fb2394d4d659 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.186 15:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:32.186 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:32.186 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.186 [ 1]:0x2 00:15:32.186 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.186 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:32.446 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e6727b568ae4b4ebf27b04309733439 00:15:32.446 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e6727b568ae4b4ebf27b04309733439 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.446 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:32.446 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.446 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.014 15:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:33.274 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:33.274 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 26c24722-a476-4b78-afcc-9d8cbf646f7a -a 10.0.0.2 -s 4420 -i 4 00:15:33.535 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:33.535 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:33.535 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.535 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:33.535 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:33.535 15:11:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:35.447 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:35.707 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:35.708 [ 0]:0x2 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=8e6727b568ae4b4ebf27b04309733439 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e6727b568ae4b4ebf27b04309733439 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.708 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:36.300 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:36.300 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:36.300 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:36.300 [ 0]:0x1 00:15:36.300 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:36.300 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=abd664514a63435a86d8fb2394d4d659 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ abd664514a63435a86d8fb2394d4d659 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:36.597 [ 1]:0x2 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e6727b568ae4b4ebf27b04309733439 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e6727b568ae4b4ebf27b04309733439 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:36.597 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:36.868 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:36.868 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:36.868 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.869 15:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:36.869 [ 0]:0x2 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:36.869 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:37.129 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e6727b568ae4b4ebf27b04309733439 00:15:37.130 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e6727b568ae4b4ebf27b04309733439 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.130 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:37.130 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.130 15:11:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 26c24722-a476-4b78-afcc-9d8cbf646f7a -a 10.0.0.2 -s 4420 -i 4 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:37.700 15:11:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:39.610 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:39.610 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:39.610 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:39.871 [ 0]:0x1 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:39.871 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=abd664514a63435a86d8fb2394d4d659 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ abd664514a63435a86d8fb2394d4d659 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.131 [ 1]:0x2 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e6727b568ae4b4ebf27b04309733439 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e6727b568ae4b4ebf27b04309733439 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.131 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.390 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.650 [ 0]:0x2 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e6727b568ae4b4ebf27b04309733439 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e6727b568ae4b4ebf27b04309733439 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.650 15:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:40.650 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:41.221 [2024-10-28 15:11:27.899855] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:41.221 request: 00:15:41.221 { 00:15:41.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.221 "nsid": 2, 00:15:41.221 "host": "nqn.2016-06.io.spdk:host1", 00:15:41.221 "method": "nvmf_ns_remove_host", 00:15:41.221 "req_id": 1 00:15:41.221 } 00:15:41.221 Got JSON-RPC error response 00:15:41.221 response: 00:15:41.221 { 00:15:41.221 "code": -32602, 00:15:41.221 "message": "Invalid parameters" 00:15:41.221 } 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:41.221 15:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.221 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:41.221 [ 0]:0x2 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:41.221 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e6727b568ae4b4ebf27b04309733439 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e6727b568ae4b4ebf27b04309733439 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3147294 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3147294 /var/tmp/host.sock 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3147294 ']' 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:41.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.481 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:41.481 [2024-10-28 15:11:28.344852] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:15:41.481 [2024-10-28 15:11:28.345023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147294 ] 00:15:41.741 [2024-10-28 15:11:28.501064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.001 [2024-10-28 15:11:28.621262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.570 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.570 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:42.570 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.508 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:43.767 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8ece3d58-6d11-45f9-aa5b-d4a87653d3fa 00:15:43.767 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:43.767 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8ECE3D586D1145F9AA5BD4A87653D3FA -i 00:15:44.027 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 67467a77-ce4a-4cb6-a42d-7a93e0aef642 00:15:44.027 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:44.027 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 67467A77CE4A4CB6A42D7A93E0AEF642 -i 00:15:44.287 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.857 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:45.797 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:45.797 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:46.367 nvme0n1 00:15:46.367 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:46.367 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:46.936 nvme1n2 00:15:46.936 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:46.936 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:46.936 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:46.936 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:46.936 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:47.506 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:47.506 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:47.506 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:47.506 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:48.077 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8ece3d58-6d11-45f9-aa5b-d4a87653d3fa == \8\e\c\e\3\d\5\8\-\6\d\1\1\-\4\5\f\9\-\a\a\5\b\-\d\4\a\8\7\6\5\3\d\3\f\a ]] 00:15:48.077 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:48.077 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:48.077 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:48.647 15:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
67467a77-ce4a-4cb6-a42d-7a93e0aef642 == \6\7\4\6\7\a\7\7\-\c\e\4\a\-\4\c\b\6\-\a\4\2\d\-\7\a\9\3\e\0\a\e\f\6\4\2 ]] 00:15:48.647 15:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.907 15:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8ece3d58-6d11-45f9-aa5b-d4a87653d3fa 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8ECE3D586D1145F9AA5BD4A87653D3FA 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8ECE3D586D1145F9AA5BD4A87653D3FA 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:49.477 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8ECE3D586D1145F9AA5BD4A87653D3FA 00:15:49.737 [2024-10-28 15:11:36.598370] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:49.737 [2024-10-28 15:11:36.598465] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:49.737 [2024-10-28 15:11:36.598507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.996 request: 00:15:49.996 { 00:15:49.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.996 "namespace": { 00:15:49.996 "bdev_name": 
"invalid", 00:15:49.996 "nsid": 1, 00:15:49.996 "nguid": "8ECE3D586D1145F9AA5BD4A87653D3FA", 00:15:49.996 "no_auto_visible": false 00:15:49.996 }, 00:15:49.996 "method": "nvmf_subsystem_add_ns", 00:15:49.996 "req_id": 1 00:15:49.996 } 00:15:49.996 Got JSON-RPC error response 00:15:49.996 response: 00:15:49.996 { 00:15:49.996 "code": -32602, 00:15:49.996 "message": "Invalid parameters" 00:15:49.996 } 00:15:49.996 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:49.996 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.996 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.996 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.996 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8ece3d58-6d11-45f9-aa5b-d4a87653d3fa 00:15:49.996 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:49.996 15:11:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8ECE3D586D1145F9AA5BD4A87653D3FA -i 00:15:50.565 15:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:52.473 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:52.473 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:52.473 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3147294 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3147294 ']' 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3147294 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3147294 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3147294' 00:15:53.042 killing process with pid 3147294 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3147294 00:15:53.042 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3147294 00:15:53.613 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.185 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:54.185 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:54.185 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.185 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:54.186 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.186 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:54.186 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.186 15:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.186 rmmod nvme_tcp 00:15:54.186 rmmod nvme_fabrics 00:15:54.186 rmmod nvme_keyring 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3145286 ']' 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3145286 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3145286 ']' 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3145286 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3145286 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3145286' 00:15:54.445 killing process with pid 3145286 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3145286 00:15:54.445 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3145286 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.704 15:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:57.244 00:15:57.244 real 0m34.458s 00:15:57.244 user 0m55.966s 00:15:57.244 sys 0m6.955s 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.244 ************************************ 00:15:57.244 END TEST nvmf_ns_masking 00:15:57.244 ************************************ 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.244 ************************************ 00:15:57.244 START TEST nvmf_nvme_cli 00:15:57.244 ************************************ 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:57.244 * Looking for test storage... 
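The nvmf_ns_masking run that finishes above boils down to two pieces: a host-side visibility probe (nvme list-ns plus the NGUID reported by nvme id-ns) and the target-side RPCs that attach a namespace with an explicit NGUID and expose it only to selected host NQNs. A condensed sketch of that flow follows, using the rpc.py path, NQNs and NGUID values shown in the trace; the helper bodies here only loosely mirror ns_masking.sh and common.sh and are illustrative, not the exact implementation.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host-side probe: the namespace must be listed and must report a real
    # (non-zero) NGUID; the masked namespace in the trace reports all zeros.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # Target-side masking: attach the bdev with an explicit NGUID, then allow
    # it only for a specific host NQN.
    uuid2nguid() { tr -d - <<< "${1^^}"; }    # 8ece3d58-... -> 8ECE3D58...
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
         -g "$(uuid2nguid 8ece3d58-6d11-45f9-aa5b-d4a87653d3fa)" -i
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    ns_is_visible 0x1    # passes only when connected as nqn.2016-06.io.spdk:host1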
00:15:57.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lcov --version 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:57.244 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:57.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.245 --rc genhtml_branch_coverage=1 00:15:57.245 --rc genhtml_function_coverage=1 00:15:57.245 --rc genhtml_legend=1 00:15:57.245 --rc geninfo_all_blocks=1 00:15:57.245 --rc geninfo_unexecuted_blocks=1 00:15:57.245 00:15:57.245 ' 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:57.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.245 --rc genhtml_branch_coverage=1 00:15:57.245 --rc genhtml_function_coverage=1 00:15:57.245 --rc genhtml_legend=1 00:15:57.245 --rc geninfo_all_blocks=1 00:15:57.245 --rc geninfo_unexecuted_blocks=1 00:15:57.245 00:15:57.245 ' 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:57.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.245 --rc genhtml_branch_coverage=1 00:15:57.245 --rc genhtml_function_coverage=1 00:15:57.245 --rc genhtml_legend=1 00:15:57.245 --rc geninfo_all_blocks=1 00:15:57.245 --rc geninfo_unexecuted_blocks=1 00:15:57.245 00:15:57.245 ' 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:57.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.245 --rc genhtml_branch_coverage=1 00:15:57.245 --rc genhtml_function_coverage=1 00:15:57.245 --rc genhtml_legend=1 00:15:57.245 --rc geninfo_all_blocks=1 00:15:57.245 --rc geninfo_unexecuted_blocks=1 00:15:57.245 00:15:57.245 ' 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
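The scripts/common.sh trace above (lt/cmp_versions) is a field-by-field compare of dotted version strings, used here to decide whether the installed lcov is older than 2. A condensed equivalent is sketched below; ver_lt is an illustrative name and this is not the exact SPDK implementation, which also validates each field via its decimal helper.

    ver_lt() {    # ver_lt 1.15 2 -> true, since 1 < 2 in the first field
        local IFS=.-: v x y
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            x=${a[v]:-0} y=${b[v]:-0}
            ((x > y)) && return 1
            ((x < y)) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    # As in the trace: take the last field of `lcov --version` and compare to 2.
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"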
00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.245 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.246 15:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:57.246 15:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:59.792 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:59.792 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.792 
15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:59.792 Found net devices under 0000:84:00.0: cvl_0_0 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.792 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:59.793 Found net devices under 0000:84:00.1: cvl_0_1 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.793 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:00.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:16:00.052 00:16:00.052 --- 10.0.0.2 ping statistics --- 00:16:00.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.052 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:00.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:16:00.052 00:16:00.052 --- 10.0.0.1 ping statistics --- 00:16:00.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.052 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3150757 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3150757 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3150757 ']' 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.052 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.052 [2024-10-28 15:11:46.875757] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
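Before the target starts, nvmftestinit/nvmf_tcp_init wires up the physical e810 ports exactly as traced above: the first port (cvl_0_0, located by matching the 0x159b PCI ID and reading /sys/bus/pci/devices/<bdf>/net/) moves into a private network namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened on the initiator side, and reachability is checked with ping in both directions. A compressed replay of those commands, with the interface names and addresses reported in the trace (NETNS is just a local shorthand here):

    NETNS=cvl_0_0_ns_spdk                    # target-side namespace from the trace

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NETNS"
    ip link set cvl_0_0 netns "$NETNS"       # target port lives inside the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
    ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NETNS" ip link set cvl_0_0 up
    ip netns exec "$NETNS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                       # root ns -> target port
    ip netns exec "$NETNS" ping -c 1 10.0.0.1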
00:16:00.052 [2024-10-28 15:11:46.875851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.312 [2024-10-28 15:11:47.064543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.571 [2024-10-28 15:11:47.232234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.571 [2024-10-28 15:11:47.232368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.571 [2024-10-28 15:11:47.232444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.571 [2024-10-28 15:11:47.232507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.571 [2024-10-28 15:11:47.232560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.571 [2024-10-28 15:11:47.237331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.571 [2024-10-28 15:11:47.237441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.571 [2024-10-28 15:11:47.237559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.571 [2024-10-28 15:11:47.237569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.829 [2024-10-28 15:11:47.504659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.829 Malloc0 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:00.829 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
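The target is then launched inside that namespace and configured over its RPC socket, as the trace above shows (nvmfappstart, waitforlisten, nvmf_create_transport, bdev_malloc_create). Below is a minimal stand-in for that sequence: the polling loop only approximates what waitforlisten really does (it simply retries a generic RPC), and rpc_cmd in the trace is a thin wrapper around scripts/rpc.py.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start nvmf_tgt on 4 cores (-m 0xF) inside the target netns, as in the trace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Rough equivalent of waitforlisten: poll until the RPC socket answers.
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

    # First configuration steps seen in the trace: TCP transport, then a malloc bdev.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0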
00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.830 Malloc1 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.830 [2024-10-28 15:11:47.609377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:16:01.088 00:16:01.088 Discovery Log Number of Records 2, Generation counter 2 00:16:01.088 =====Discovery Log Entry 0====== 00:16:01.088 trtype: tcp 00:16:01.088 adrfam: ipv4 00:16:01.088 subtype: current discovery subsystem 00:16:01.088 treq: not required 00:16:01.088 portid: 0 00:16:01.088 trsvcid: 4420 00:16:01.088 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:16:01.088 traddr: 10.0.0.2 00:16:01.088 eflags: explicit discovery connections, duplicate discovery information 00:16:01.088 sectype: none 00:16:01.088 =====Discovery Log Entry 1====== 00:16:01.088 trtype: tcp 00:16:01.088 adrfam: ipv4 00:16:01.088 subtype: nvme subsystem 00:16:01.088 treq: not required 00:16:01.088 portid: 0 00:16:01.088 trsvcid: 4420 00:16:01.088 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:01.088 traddr: 10.0.0.2 00:16:01.088 eflags: none 00:16:01.088 sectype: none 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:01.088 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:01.654 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:01.654 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:01.654 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.654 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:01.654 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:01.654 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:04.179 15:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:04.179 /dev/nvme0n2 ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:04.179 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.438 15:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:04.438 rmmod nvme_tcp 00:16:04.438 rmmod nvme_fabrics 00:16:04.438 rmmod nvme_keyring 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3150757 ']' 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3150757 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3150757 ']' 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3150757 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3150757 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3150757' 00:16:04.438 killing process with pid 3150757 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3150757 00:16:04.438 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3150757 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.008 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:06.921 00:16:06.921 real 0m9.952s 00:16:06.921 user 0m18.391s 00:16:06.921 sys 0m3.090s 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.921 ************************************ 00:16:06.921 END TEST nvmf_nvme_cli 00:16:06.921 ************************************ 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.921 ************************************ 00:16:06.921 START TEST nvmf_vfio_user 00:16:06.921 ************************************ 00:16:06.921 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:16:07.182 * Looking for test storage... 00:16:07.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # lcov --version 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.182 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.183 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:07.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.183 --rc genhtml_branch_coverage=1 00:16:07.183 --rc genhtml_function_coverage=1 00:16:07.183 --rc genhtml_legend=1 00:16:07.183 --rc geninfo_all_blocks=1 00:16:07.183 --rc geninfo_unexecuted_blocks=1 00:16:07.183 00:16:07.183 ' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:07.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.183 --rc genhtml_branch_coverage=1 00:16:07.183 --rc genhtml_function_coverage=1 00:16:07.183 --rc genhtml_legend=1 00:16:07.183 --rc geninfo_all_blocks=1 00:16:07.183 --rc geninfo_unexecuted_blocks=1 00:16:07.183 00:16:07.183 ' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:07.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.183 --rc genhtml_branch_coverage=1 00:16:07.183 --rc genhtml_function_coverage=1 00:16:07.183 --rc genhtml_legend=1 00:16:07.183 --rc geninfo_all_blocks=1 00:16:07.183 --rc geninfo_unexecuted_blocks=1 00:16:07.183 00:16:07.183 ' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:07.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.183 --rc genhtml_branch_coverage=1 00:16:07.183 --rc genhtml_function_coverage=1 00:16:07.183 --rc genhtml_legend=1 00:16:07.183 --rc geninfo_all_blocks=1 00:16:07.183 --rc geninfo_unexecuted_blocks=1 00:16:07.183 00:16:07.183 ' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
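Editor's note: the NVME_HOSTNQN / NVME_HOSTID pair sourced from nvmf/common.sh just above is the same host identity the nvmf_nvme_cli test used earlier when attaching the kernel initiator over TCP. Condensed from this trace, and only as a sketch (the NQN, host UUID, address and serial are the values visible in this particular run, not defaults):

  # host identity as generated by 'nvme gen-hostnqn' in nvmf/common.sh (values from this run)
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
  HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
  # attach to the TCP subsystem exported by the target, as in the trace above
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # rough equivalent of waitforserial: poll until both namespaces report the test serial
  for _ in $(seq 1 15); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 2 )) && break
      sleep 2
  done
  # detach again, mirroring nvme_cli.sh@60
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1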
00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3151811 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3151811' 00:16:07.183 Process pid: 3151811 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3151811 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3151811 ']' 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.183 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:07.444 [2024-10-28 15:11:54.115488] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:16:07.444 [2024-10-28 15:11:54.115591] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.444 [2024-10-28 15:11:54.274861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.702 [2024-10-28 15:11:54.382872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.702 [2024-10-28 15:11:54.382985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
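Editor's note: stripped of the autotest plumbing, the target bring-up traced here amounts to launching nvmf_tgt and waiting for its RPC socket to answer. A rough stand-in is sketched below; paths are assumed relative to an SPDK checkout, and rpc_get_methods is used only as a cheap liveness probe in place of the test suite's waitforlisten helper:

  # start the target on cores 0-3 with all tracepoint groups enabled and shm id 0 (flags as in the trace)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # poll the default RPC socket until the application is up
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done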
00:16:07.702 [2024-10-28 15:11:54.383022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.702 [2024-10-28 15:11:54.383057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.702 [2024-10-28 15:11:54.383083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.702 [2024-10-28 15:11:54.386298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.702 [2024-10-28 15:11:54.386399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.702 [2024-10-28 15:11:54.386491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.702 [2024-10-28 15:11:54.386494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.702 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.702 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:07.702 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:09.076 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:09.076 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:09.076 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:09.076 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:09.076 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:09.334 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:09.593 Malloc1 00:16:09.593 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:09.875 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:10.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:10.766 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.766 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:10.766 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:11.024 Malloc2 00:16:11.024 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
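Editor's note: the RPC sequence traced here builds one vfio-user endpoint per Malloc bdev. Per device, as driven by the `seq 1 $NUM_DEVICES` loop in the script, it amounts to the following (rpc.py path shortened to an SPDK checkout, arguments exactly as in the trace):

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      # the listener "address" for vfio-user is a directory the target populates
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      # size and block size come from MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
      ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done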
00:16:11.281 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:11.845 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:12.102 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:12.102 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:12.102 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:12.102 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:12.102 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:12.102 15:11:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:12.102 [2024-10-28 15:11:58.860301] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:16:12.102 [2024-10-28 15:11:58.860396] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152368 ] 00:16:12.102 [2024-10-28 15:11:58.918628] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:12.102 [2024-10-28 15:11:58.931204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.102 [2024-10-28 15:11:58.931233] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fec9ddff000 00:16:12.102 [2024-10-28 15:11:58.932202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.933199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.934203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.935211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.936217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.937224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.938231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.939236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.102 [2024-10-28 15:11:58.940243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.102 [2024-10-28 15:11:58.940267] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fec9ddf4000 00:16:12.102 [2024-10-28 15:11:58.941387] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:12.102 [2024-10-28 15:11:58.957340] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:12.102 [2024-10-28 15:11:58.957381] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:12.102 [2024-10-28 15:11:58.962366] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:12.102 [2024-10-28 15:11:58.962421] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:12.102 [2024-10-28 15:11:58.962520] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:12.102 [2024-10-28 15:11:58.962563] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:12.102 [2024-10-28 15:11:58.962575] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:12.102 [2024-10-28 15:11:58.963358] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:12.102 [2024-10-28 15:11:58.963379] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:12.102 [2024-10-28 15:11:58.963391] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:12.102 [2024-10-28 15:11:58.964360] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:12.102 [2024-10-28 15:11:58.964380] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:12.102 [2024-10-28 15:11:58.964400] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:12.102 [2024-10-28 15:11:58.965374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:12.102 [2024-10-28 15:11:58.965396] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:12.102 [2024-10-28 15:11:58.966381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:16:12.102 [2024-10-28 15:11:58.966402] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:12.102 [2024-10-28 15:11:58.966412] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:12.102 [2024-10-28 15:11:58.966423] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:12.102 [2024-10-28 15:11:58.966537] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:12.102 [2024-10-28 15:11:58.966550] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:12.102 [2024-10-28 15:11:58.966559] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:12.102 [2024-10-28 15:11:58.967394] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:12.102 [2024-10-28 15:11:58.968390] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:12.360 [2024-10-28 15:11:58.969393] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:12.360 [2024-10-28 15:11:58.970398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:12.360 [2024-10-28 15:11:58.970551] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:12.360 [2024-10-28 15:11:58.971404] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:12.360 [2024-10-28 15:11:58.971424] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:12.360 [2024-10-28 15:11:58.971434] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971458] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:12.360 [2024-10-28 15:11:58.971473] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971503] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.360 [2024-10-28 15:11:58.971513] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.360 [2024-10-28 15:11:58.971520] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.360 [2024-10-28 15:11:58.971544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:16:12.360 [2024-10-28 15:11:58.971616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:12.360 [2024-10-28 15:11:58.971659] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:12.360 [2024-10-28 15:11:58.971670] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:12.360 [2024-10-28 15:11:58.971678] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:12.360 [2024-10-28 15:11:58.971688] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:12.360 [2024-10-28 15:11:58.971696] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:12.360 [2024-10-28 15:11:58.971706] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:12.360 [2024-10-28 15:11:58.971715] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971730] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:12.360 [2024-10-28 15:11:58.971764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:12.360 [2024-10-28 15:11:58.971788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.360 [2024-10-28 15:11:58.971802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.360 [2024-10-28 15:11:58.971815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.360 [2024-10-28 15:11:58.971827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.360 [2024-10-28 15:11:58.971835] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971847] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:12.360 [2024-10-28 15:11:58.971873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:12.360 [2024-10-28 15:11:58.971889] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:12.360 
[2024-10-28 15:11:58.971899] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971911] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971922] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.971935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:12.360 [2024-10-28 15:11:58.971965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:12.360 [2024-10-28 15:11:58.972053] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972072] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972086] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:12.360 [2024-10-28 15:11:58.972094] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:12.360 [2024-10-28 15:11:58.972100] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.360 [2024-10-28 15:11:58.972109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:12.360 [2024-10-28 15:11:58.972125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:12.360 [2024-10-28 15:11:58.972145] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:12.360 [2024-10-28 15:11:58.972162] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972178] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972190] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.360 [2024-10-28 15:11:58.972198] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.360 [2024-10-28 15:11:58.972204] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.360 [2024-10-28 15:11:58.972213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.360 [2024-10-28 15:11:58.972241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:12.360 [2024-10-28 15:11:58.972267] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972283] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972295] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.360 [2024-10-28 15:11:58.972302] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.360 [2024-10-28 15:11:58.972308] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.360 [2024-10-28 15:11:58.972317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.360 [2024-10-28 15:11:58.972331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:12.360 [2024-10-28 15:11:58.972347] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972358] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972373] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972384] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972396] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972405] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972415] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:12.360 [2024-10-28 15:11:58.972422] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:12.360 [2024-10-28 15:11:58.972430] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:12.361 [2024-10-28 15:11:58.972459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.972477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.972496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.972508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.972524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.972535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.972551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.972562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.972584] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:12.361 [2024-10-28 15:11:58.972594] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:12.361 [2024-10-28 15:11:58.972600] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:12.361 [2024-10-28 15:11:58.972606] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:12.361 [2024-10-28 15:11:58.972612] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:12.361 [2024-10-28 15:11:58.972621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:12.361 [2024-10-28 15:11:58.972648] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:12.361 [2024-10-28 15:11:58.972670] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:12.361 [2024-10-28 15:11:58.972677] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.361 [2024-10-28 15:11:58.972686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.972698] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:12.361 [2024-10-28 15:11:58.972707] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.361 [2024-10-28 15:11:58.972713] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.361 [2024-10-28 15:11:58.972722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.972753] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:12.361 [2024-10-28 15:11:58.972769] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:12.361 [2024-10-28 15:11:58.972775] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.361 [2024-10-28 15:11:58.972785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.972798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.972819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.972838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.972851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:12.361 ===================================================== 00:16:12.361 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:12.361 ===================================================== 00:16:12.361 Controller Capabilities/Features 00:16:12.361 ================================ 00:16:12.361 Vendor ID: 4e58 00:16:12.361 Subsystem Vendor ID: 4e58 00:16:12.361 Serial Number: SPDK1 00:16:12.361 Model Number: SPDK bdev Controller 00:16:12.361 Firmware Version: 25.01 00:16:12.361 Recommended Arb Burst: 6 00:16:12.361 IEEE OUI Identifier: 8d 6b 50 00:16:12.361 Multi-path I/O 00:16:12.361 May have multiple subsystem ports: Yes 00:16:12.361 May have multiple controllers: Yes 00:16:12.361 Associated with SR-IOV VF: No 00:16:12.361 Max Data Transfer Size: 131072 00:16:12.361 Max Number of Namespaces: 32 00:16:12.361 Max Number of I/O Queues: 127 00:16:12.361 NVMe Specification Version (VS): 1.3 00:16:12.361 NVMe Specification Version (Identify): 1.3 00:16:12.361 Maximum Queue Entries: 256 00:16:12.361 Contiguous Queues Required: Yes 00:16:12.361 Arbitration Mechanisms Supported 00:16:12.361 Weighted Round Robin: Not Supported 00:16:12.361 Vendor Specific: Not Supported 00:16:12.361 Reset Timeout: 15000 ms 00:16:12.361 Doorbell Stride: 4 bytes 00:16:12.361 NVM Subsystem Reset: Not Supported 00:16:12.361 Command Sets Supported 00:16:12.361 NVM Command Set: Supported 00:16:12.361 Boot Partition: Not Supported 00:16:12.361 Memory Page Size Minimum: 4096 bytes 00:16:12.361 Memory Page Size Maximum: 4096 bytes 00:16:12.361 Persistent Memory Region: Not Supported 00:16:12.361 Optional Asynchronous Events Supported 00:16:12.361 Namespace Attribute Notices: Supported 00:16:12.361 Firmware Activation Notices: Not Supported 00:16:12.361 ANA Change Notices: Not Supported 00:16:12.361 PLE Aggregate Log Change Notices: Not Supported 00:16:12.361 LBA Status Info Alert Notices: Not Supported 00:16:12.361 EGE Aggregate Log Change Notices: Not Supported 00:16:12.361 Normal NVM Subsystem Shutdown event: Not Supported 00:16:12.361 Zone Descriptor Change Notices: Not Supported 00:16:12.361 Discovery Log Change Notices: Not Supported 00:16:12.361 Controller Attributes 00:16:12.361 128-bit Host Identifier: Supported 00:16:12.361 Non-Operational Permissive Mode: Not Supported 00:16:12.361 NVM Sets: Not Supported 00:16:12.361 Read Recovery Levels: Not Supported 00:16:12.361 Endurance Groups: Not Supported 00:16:12.361 Predictable Latency Mode: Not Supported 00:16:12.361 Traffic Based Keep ALive: Not Supported 00:16:12.361 Namespace Granularity: Not Supported 00:16:12.361 SQ Associations: Not Supported 00:16:12.361 UUID List: Not Supported 00:16:12.361 Multi-Domain Subsystem: Not Supported 00:16:12.361 Fixed Capacity Management: Not Supported 00:16:12.361 Variable Capacity Management: Not Supported 00:16:12.361 Delete Endurance Group: Not Supported 00:16:12.361 Delete NVM Set: Not Supported 00:16:12.361 Extended LBA Formats Supported: Not Supported 00:16:12.361 Flexible Data Placement Supported: Not Supported 00:16:12.361 00:16:12.361 Controller Memory Buffer Support 00:16:12.361 ================================ 00:16:12.361 
Supported: No 00:16:12.361 00:16:12.361 Persistent Memory Region Support 00:16:12.361 ================================ 00:16:12.361 Supported: No 00:16:12.361 00:16:12.361 Admin Command Set Attributes 00:16:12.361 ============================ 00:16:12.361 Security Send/Receive: Not Supported 00:16:12.361 Format NVM: Not Supported 00:16:12.361 Firmware Activate/Download: Not Supported 00:16:12.361 Namespace Management: Not Supported 00:16:12.361 Device Self-Test: Not Supported 00:16:12.361 Directives: Not Supported 00:16:12.361 NVMe-MI: Not Supported 00:16:12.361 Virtualization Management: Not Supported 00:16:12.361 Doorbell Buffer Config: Not Supported 00:16:12.361 Get LBA Status Capability: Not Supported 00:16:12.361 Command & Feature Lockdown Capability: Not Supported 00:16:12.361 Abort Command Limit: 4 00:16:12.361 Async Event Request Limit: 4 00:16:12.361 Number of Firmware Slots: N/A 00:16:12.361 Firmware Slot 1 Read-Only: N/A 00:16:12.361 Firmware Activation Without Reset: N/A 00:16:12.361 Multiple Update Detection Support: N/A 00:16:12.361 Firmware Update Granularity: No Information Provided 00:16:12.361 Per-Namespace SMART Log: No 00:16:12.361 Asymmetric Namespace Access Log Page: Not Supported 00:16:12.361 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:12.361 Command Effects Log Page: Supported 00:16:12.361 Get Log Page Extended Data: Supported 00:16:12.361 Telemetry Log Pages: Not Supported 00:16:12.361 Persistent Event Log Pages: Not Supported 00:16:12.361 Supported Log Pages Log Page: May Support 00:16:12.361 Commands Supported & Effects Log Page: Not Supported 00:16:12.361 Feature Identifiers & Effects Log Page:May Support 00:16:12.361 NVMe-MI Commands & Effects Log Page: May Support 00:16:12.361 Data Area 4 for Telemetry Log: Not Supported 00:16:12.361 Error Log Page Entries Supported: 128 00:16:12.361 Keep Alive: Supported 00:16:12.361 Keep Alive Granularity: 10000 ms 00:16:12.361 00:16:12.361 NVM Command Set Attributes 00:16:12.361 ========================== 00:16:12.361 Submission Queue Entry Size 00:16:12.361 Max: 64 00:16:12.361 Min: 64 00:16:12.361 Completion Queue Entry Size 00:16:12.361 Max: 16 00:16:12.361 Min: 16 00:16:12.361 Number of Namespaces: 32 00:16:12.361 Compare Command: Supported 00:16:12.361 Write Uncorrectable Command: Not Supported 00:16:12.361 Dataset Management Command: Supported 00:16:12.361 Write Zeroes Command: Supported 00:16:12.361 Set Features Save Field: Not Supported 00:16:12.361 Reservations: Not Supported 00:16:12.361 Timestamp: Not Supported 00:16:12.361 Copy: Supported 00:16:12.361 Volatile Write Cache: Present 00:16:12.361 Atomic Write Unit (Normal): 1 00:16:12.361 Atomic Write Unit (PFail): 1 00:16:12.361 Atomic Compare & Write Unit: 1 00:16:12.361 Fused Compare & Write: Supported 00:16:12.361 Scatter-Gather List 00:16:12.361 SGL Command Set: Supported (Dword aligned) 00:16:12.361 SGL Keyed: Not Supported 00:16:12.361 SGL Bit Bucket Descriptor: Not Supported 00:16:12.361 SGL Metadata Pointer: Not Supported 00:16:12.361 Oversized SGL: Not Supported 00:16:12.361 SGL Metadata Address: Not Supported 00:16:12.361 SGL Offset: Not Supported 00:16:12.361 Transport SGL Data Block: Not Supported 00:16:12.361 Replay Protected Memory Block: Not Supported 00:16:12.361 00:16:12.361 Firmware Slot Information 00:16:12.361 ========================= 00:16:12.361 Active slot: 1 00:16:12.361 Slot 1 Firmware Revision: 25.01 00:16:12.361 00:16:12.361 00:16:12.361 Commands Supported and Effects 00:16:12.361 ============================== 00:16:12.361 Admin 
Commands 00:16:12.361 -------------- 00:16:12.361 Get Log Page (02h): Supported 00:16:12.361 Identify (06h): Supported 00:16:12.361 Abort (08h): Supported 00:16:12.361 Set Features (09h): Supported 00:16:12.361 Get Features (0Ah): Supported 00:16:12.361 Asynchronous Event Request (0Ch): Supported 00:16:12.361 Keep Alive (18h): Supported 00:16:12.361 I/O Commands 00:16:12.361 ------------ 00:16:12.361 Flush (00h): Supported LBA-Change 00:16:12.361 Write (01h): Supported LBA-Change 00:16:12.361 Read (02h): Supported 00:16:12.361 Compare (05h): Supported 00:16:12.361 Write Zeroes (08h): Supported LBA-Change 00:16:12.361 Dataset Management (09h): Supported LBA-Change 00:16:12.361 Copy (19h): Supported LBA-Change 00:16:12.361 00:16:12.361 Error Log 00:16:12.361 ========= 00:16:12.361 00:16:12.361 Arbitration 00:16:12.361 =========== 00:16:12.361 Arbitration Burst: 1 00:16:12.361 00:16:12.361 Power Management 00:16:12.361 ================ 00:16:12.361 Number of Power States: 1 00:16:12.361 Current Power State: Power State #0 00:16:12.361 Power State #0: 00:16:12.361 Max Power: 0.00 W 00:16:12.361 Non-Operational State: Operational 00:16:12.361 Entry Latency: Not Reported 00:16:12.361 Exit Latency: Not Reported 00:16:12.361 Relative Read Throughput: 0 00:16:12.361 Relative Read Latency: 0 00:16:12.361 Relative Write Throughput: 0 00:16:12.361 Relative Write Latency: 0 00:16:12.361 Idle Power: Not Reported 00:16:12.361 Active Power: Not Reported 00:16:12.361 Non-Operational Permissive Mode: Not Supported 00:16:12.361 00:16:12.361 Health Information 00:16:12.361 ================== 00:16:12.361 Critical Warnings: 00:16:12.361 Available Spare Space: OK 00:16:12.361 Temperature: OK 00:16:12.361 Device Reliability: OK 00:16:12.361 Read Only: No 00:16:12.361 Volatile Memory Backup: OK 00:16:12.361 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:12.361 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:12.361 Available Spare: 0% 00:16:12.361 Available Sp[2024-10-28 15:11:58.972986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:12.361 [2024-10-28 15:11:58.973017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.973063] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:12.361 [2024-10-28 15:11:58.973081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.973092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.973102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.973111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.361 [2024-10-28 15:11:58.976662] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:12.361 [2024-10-28 15:11:58.976687] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:12.361 [2024-10-28 15:11:58.977426] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:12.361 [2024-10-28 15:11:58.977518] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:12.361 [2024-10-28 15:11:58.977532] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:12.361 [2024-10-28 15:11:58.978438] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:12.361 [2024-10-28 15:11:58.978461] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:12.361 [2024-10-28 15:11:58.978518] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:12.361 [2024-10-28 15:11:58.980477] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:12.361 are Threshold: 0% 00:16:12.361 Life Percentage Used: 0% 00:16:12.361 Data Units Read: 0 00:16:12.361 Data Units Written: 0 00:16:12.361 Host Read Commands: 0 00:16:12.361 Host Write Commands: 0 00:16:12.361 Controller Busy Time: 0 minutes 00:16:12.361 Power Cycles: 0 00:16:12.361 Power On Hours: 0 hours 00:16:12.361 Unsafe Shutdowns: 0 00:16:12.361 Unrecoverable Media Errors: 0 00:16:12.361 Lifetime Error Log Entries: 0 00:16:12.361 Warning Temperature Time: 0 minutes 00:16:12.361 Critical Temperature Time: 0 minutes 00:16:12.361 00:16:12.361 Number of Queues 00:16:12.361 ================ 00:16:12.361 Number of I/O Submission Queues: 127 00:16:12.361 Number of I/O Completion Queues: 127 00:16:12.361 00:16:12.361 Active Namespaces 00:16:12.361 ================= 00:16:12.361 Namespace ID:1 00:16:12.361 Error Recovery Timeout: Unlimited 00:16:12.361 Command Set Identifier: NVM (00h) 00:16:12.361 Deallocate: Supported 00:16:12.361 Deallocated/Unwritten Error: Not Supported 00:16:12.361 Deallocated Read Value: Unknown 00:16:12.361 Deallocate in Write Zeroes: Not Supported 00:16:12.361 Deallocated Guard Field: 0xFFFF 00:16:12.361 Flush: Supported 00:16:12.361 Reservation: Supported 00:16:12.361 Namespace Sharing Capabilities: Multiple Controllers 00:16:12.361 Size (in LBAs): 131072 (0GiB) 00:16:12.362 Capacity (in LBAs): 131072 (0GiB) 00:16:12.362 Utilization (in LBAs): 131072 (0GiB) 00:16:12.362 NGUID: 421D06F72AAD498F978C549374B5B53B 00:16:12.362 UUID: 421d06f7-2aad-498f-978c-549374b5b53b 00:16:12.362 Thin Provisioning: Not Supported 00:16:12.362 Per-NS Atomic Units: Yes 00:16:12.362 Atomic Boundary Size (Normal): 0 00:16:12.362 Atomic Boundary Size (PFail): 0 00:16:12.362 Atomic Boundary Offset: 0 00:16:12.362 Maximum Single Source Range Length: 65535 00:16:12.362 Maximum Copy Length: 65535 00:16:12.362 Maximum Source Range Count: 1 00:16:12.362 NGUID/EUI64 Never Reused: No 00:16:12.362 Namespace Write Protected: No 00:16:12.362 Number of LBA Formats: 1 00:16:12.362 Current LBA Format: LBA Format #00 00:16:12.362 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:12.362 00:16:12.362 15:11:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
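The nvmf_vfio_user.sh@84 step above drives spdk_nvme_perf against the first vfio-user controller: a 5 second, queue-depth 128, 4 KiB sequential-read load pinned to core 1, whose results follow below. A minimal sketch of the same invocation outside the harness, reusing the transport string and flags exactly as printed in the log (SPDK_BIN is only a placeholder for the build/bin path shown there, and -s 256 / -g are kept as the harness passed them):

# Reproduce the nvmf_vfio_user.sh@84 perf run by hand.
# -q 128: queue depth, -o 4096: I/O size in bytes, -w read: workload
# (the @85 step repeats this with -w write), -t 5: run time in seconds, -c 0x2: core mask (core 1).
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
"$SPDK_BIN/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2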
00:16:12.618 [2024-10-28 15:11:59.292730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:17.902 Initializing NVMe Controllers 00:16:17.902 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:17.902 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:17.902 Initialization complete. Launching workers. 00:16:17.902 ======================================================== 00:16:17.902 Latency(us) 00:16:17.902 Device Information : IOPS MiB/s Average min max 00:16:17.902 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32490.39 126.92 3941.35 1190.42 9244.02 00:16:17.902 ======================================================== 00:16:17.902 Total : 32490.39 126.92 3941.35 1190.42 9244.02 00:16:17.902 00:16:17.902 [2024-10-28 15:12:04.314513] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:17.902 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:17.902 [2024-10-28 15:12:04.573704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:23.162 Initializing NVMe Controllers 00:16:23.162 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:23.162 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:23.162 Initialization complete. Launching workers. 
00:16:23.162 ======================================================== 00:16:23.162 Latency(us) 00:16:23.162 Device Information : IOPS MiB/s Average min max 00:16:23.162 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8009.94 6980.89 15960.48 00:16:23.162 ======================================================== 00:16:23.162 Total : 16000.00 62.50 8009.94 6980.89 15960.48 00:16:23.162 00:16:23.162 [2024-10-28 15:12:09.612848] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:23.162 15:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:23.162 [2024-10-28 15:12:09.868091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:28.426 [2024-10-28 15:12:14.948112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:28.426 Initializing NVMe Controllers 00:16:28.426 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:28.426 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:28.426 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:28.426 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:28.426 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:28.426 Initialization complete. Launching workers. 00:16:28.426 Starting thread on core 2 00:16:28.426 Starting thread on core 3 00:16:28.426 Starting thread on core 1 00:16:28.426 15:12:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:28.685 [2024-10-28 15:12:15.351090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:31.968 [2024-10-28 15:12:18.748976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:31.968 Initializing NVMe Controllers 00:16:31.968 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.968 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:31.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:31.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:31.968 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:31.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:31.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:31.968 Initialization complete. Launching workers. 
00:16:31.968 Starting thread on core 1 with urgent priority queue 00:16:31.968 Starting thread on core 2 with urgent priority queue 00:16:31.968 Starting thread on core 3 with urgent priority queue 00:16:31.968 Starting thread on core 0 with urgent priority queue 00:16:31.968 SPDK bdev Controller (SPDK1 ) core 0: 2766.67 IO/s 36.14 secs/100000 ios 00:16:31.968 SPDK bdev Controller (SPDK1 ) core 1: 3031.67 IO/s 32.99 secs/100000 ios 00:16:31.968 SPDK bdev Controller (SPDK1 ) core 2: 2336.67 IO/s 42.80 secs/100000 ios 00:16:31.968 SPDK bdev Controller (SPDK1 ) core 3: 2909.33 IO/s 34.37 secs/100000 ios 00:16:31.968 ======================================================== 00:16:31.968 00:16:31.968 15:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:32.226 [2024-10-28 15:12:19.061729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:32.483 Initializing NVMe Controllers 00:16:32.483 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.483 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.483 Namespace ID: 1 size: 0GB 00:16:32.483 Initialization complete. 00:16:32.483 INFO: using host memory buffer for IO 00:16:32.483 Hello world! 00:16:32.483 [2024-10-28 15:12:19.097453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:32.483 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:32.740 [2024-10-28 15:12:19.502087] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.673 Initializing NVMe Controllers 00:16:33.673 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.673 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.673 Initialization complete. Launching workers. 
00:16:33.673 submit (in ns) avg, min, max = 8008.3, 3541.1, 4016982.2 00:16:33.673 complete (in ns) avg, min, max = 25270.8, 2066.7, 5012824.4 00:16:33.673 00:16:33.673 Submit histogram 00:16:33.673 ================ 00:16:33.673 Range in us Cumulative Count 00:16:33.673 3.532 - 3.556: 0.0155% ( 2) 00:16:33.673 3.556 - 3.579: 0.0854% ( 9) 00:16:33.673 3.579 - 3.603: 0.8774% ( 102) 00:16:33.673 3.603 - 3.627: 2.8574% ( 255) 00:16:33.673 3.627 - 3.650: 7.8267% ( 640) 00:16:33.673 3.650 - 3.674: 15.3117% ( 964) 00:16:33.673 3.674 - 3.698: 25.6775% ( 1335) 00:16:33.673 3.698 - 3.721: 35.3366% ( 1244) 00:16:33.673 3.721 - 3.745: 44.3513% ( 1161) 00:16:33.673 3.745 - 3.769: 49.6933% ( 688) 00:16:33.673 3.769 - 3.793: 55.4003% ( 735) 00:16:33.673 3.793 - 3.816: 59.9270% ( 583) 00:16:33.673 3.816 - 3.840: 64.5547% ( 596) 00:16:33.673 3.840 - 3.864: 68.3438% ( 488) 00:16:33.673 3.864 - 3.887: 71.5661% ( 415) 00:16:33.673 3.887 - 3.911: 75.2232% ( 471) 00:16:33.673 3.911 - 3.935: 79.2298% ( 516) 00:16:33.673 3.935 - 3.959: 82.3045% ( 396) 00:16:33.673 3.959 - 3.982: 84.7892% ( 320) 00:16:33.673 3.982 - 4.006: 86.9167% ( 274) 00:16:33.673 4.006 - 4.030: 88.6094% ( 218) 00:16:33.673 4.030 - 4.053: 90.3253% ( 221) 00:16:33.673 4.053 - 4.077: 91.7152% ( 179) 00:16:33.673 4.077 - 4.101: 92.8022% ( 140) 00:16:33.673 4.101 - 4.124: 93.6408% ( 108) 00:16:33.673 4.124 - 4.148: 94.2232% ( 75) 00:16:33.673 4.148 - 4.172: 94.4949% ( 35) 00:16:33.673 4.172 - 4.196: 94.7822% ( 37) 00:16:33.673 4.196 - 4.219: 95.0229% ( 31) 00:16:33.673 4.219 - 4.243: 95.2403% ( 28) 00:16:33.673 4.243 - 4.267: 95.3878% ( 19) 00:16:33.673 4.267 - 4.290: 95.4965% ( 14) 00:16:33.673 4.290 - 4.314: 95.6052% ( 14) 00:16:33.673 4.314 - 4.338: 95.7295% ( 16) 00:16:33.673 4.338 - 4.361: 95.8770% ( 19) 00:16:33.673 4.361 - 4.385: 95.9469% ( 9) 00:16:33.673 4.385 - 4.409: 95.9857% ( 5) 00:16:33.673 4.409 - 4.433: 96.0323% ( 6) 00:16:33.673 4.433 - 4.456: 96.0711% ( 5) 00:16:33.673 4.456 - 4.480: 96.1099% ( 5) 00:16:33.673 4.480 - 4.504: 96.1332% ( 3) 00:16:33.673 4.504 - 4.527: 96.1565% ( 3) 00:16:33.673 4.527 - 4.551: 96.1954% ( 5) 00:16:33.673 4.551 - 4.575: 96.2419% ( 6) 00:16:33.673 4.575 - 4.599: 96.2730% ( 4) 00:16:33.673 4.622 - 4.646: 96.2808% ( 1) 00:16:33.673 4.646 - 4.670: 96.3118% ( 4) 00:16:33.673 4.670 - 4.693: 96.3429% ( 4) 00:16:33.673 4.693 - 4.717: 96.3584% ( 2) 00:16:33.673 4.717 - 4.741: 96.4128% ( 7) 00:16:33.673 4.741 - 4.764: 96.4361% ( 3) 00:16:33.673 4.764 - 4.788: 96.4826% ( 6) 00:16:33.673 4.788 - 4.812: 96.5215% ( 5) 00:16:33.673 4.812 - 4.836: 96.5448% ( 3) 00:16:33.673 4.836 - 4.859: 96.5836% ( 5) 00:16:33.673 4.859 - 4.883: 96.6379% ( 7) 00:16:33.673 4.883 - 4.907: 96.7001% ( 8) 00:16:33.673 4.907 - 4.930: 96.7699% ( 9) 00:16:33.673 4.930 - 4.954: 96.8321% ( 8) 00:16:33.673 4.954 - 4.978: 96.9019% ( 9) 00:16:33.673 4.978 - 5.001: 96.9563% ( 7) 00:16:33.673 5.001 - 5.025: 97.0417% ( 11) 00:16:33.673 5.025 - 5.049: 97.0883% ( 6) 00:16:33.673 5.049 - 5.073: 97.1271% ( 5) 00:16:33.673 5.073 - 5.096: 97.1970% ( 9) 00:16:33.673 5.096 - 5.120: 97.2280% ( 4) 00:16:33.673 5.120 - 5.144: 97.2824% ( 7) 00:16:33.673 5.144 - 5.167: 97.3367% ( 7) 00:16:33.673 5.167 - 5.191: 97.3523% ( 2) 00:16:33.673 5.191 - 5.215: 97.3678% ( 2) 00:16:33.673 5.215 - 5.239: 97.3833% ( 2) 00:16:33.673 5.239 - 5.262: 97.4222% ( 5) 00:16:33.673 5.262 - 5.286: 97.4377% ( 2) 00:16:33.673 5.286 - 5.310: 97.4532% ( 2) 00:16:33.673 5.310 - 5.333: 97.4843% ( 4) 00:16:33.673 5.333 - 5.357: 97.5076% ( 3) 00:16:33.673 5.357 - 5.381: 97.5386% ( 4) 
00:16:33.673 5.428 - 5.452: 97.5464% ( 1) 00:16:33.673 5.452 - 5.476: 97.5542% ( 1) 00:16:33.673 5.499 - 5.523: 97.5619% ( 1) 00:16:33.674 5.523 - 5.547: 97.5775% ( 2) 00:16:33.674 5.547 - 5.570: 97.5852% ( 1) 00:16:33.674 5.618 - 5.641: 97.5930% ( 1) 00:16:33.674 5.641 - 5.665: 97.6085% ( 2) 00:16:33.674 5.665 - 5.689: 97.6240% ( 2) 00:16:33.674 5.689 - 5.713: 97.6396% ( 2) 00:16:33.674 5.736 - 5.760: 97.6629% ( 3) 00:16:33.674 5.760 - 5.784: 97.6706% ( 1) 00:16:33.674 5.784 - 5.807: 97.6862% ( 2) 00:16:33.674 5.879 - 5.902: 97.6939% ( 1) 00:16:33.674 5.902 - 5.926: 97.7017% ( 1) 00:16:33.674 5.950 - 5.973: 97.7094% ( 1) 00:16:33.674 6.163 - 6.210: 97.7172% ( 1) 00:16:33.674 6.305 - 6.353: 97.7250% ( 1) 00:16:33.674 6.353 - 6.400: 97.7405% ( 2) 00:16:33.674 6.495 - 6.542: 97.7483% ( 1) 00:16:33.674 6.590 - 6.637: 97.7560% ( 1) 00:16:33.674 6.637 - 6.684: 97.7638% ( 1) 00:16:33.674 6.827 - 6.874: 97.7716% ( 1) 00:16:33.674 6.874 - 6.921: 97.7793% ( 1) 00:16:33.674 6.921 - 6.969: 97.7871% ( 1) 00:16:33.674 7.016 - 7.064: 97.7949% ( 1) 00:16:33.674 7.064 - 7.111: 97.8104% ( 2) 00:16:33.674 7.111 - 7.159: 97.8182% ( 1) 00:16:33.674 7.253 - 7.301: 97.8259% ( 1) 00:16:33.674 7.348 - 7.396: 97.8414% ( 2) 00:16:33.674 7.396 - 7.443: 97.8492% ( 1) 00:16:33.674 7.443 - 7.490: 97.8647% ( 2) 00:16:33.674 7.585 - 7.633: 97.8725% ( 1) 00:16:33.674 7.633 - 7.680: 97.8803% ( 1) 00:16:33.674 7.680 - 7.727: 97.8880% ( 1) 00:16:33.674 7.727 - 7.775: 97.9036% ( 2) 00:16:33.674 7.775 - 7.822: 97.9269% ( 3) 00:16:33.674 7.822 - 7.870: 97.9502% ( 3) 00:16:33.674 7.870 - 7.917: 97.9579% ( 1) 00:16:33.674 7.917 - 7.964: 97.9657% ( 1) 00:16:33.674 7.964 - 8.012: 97.9812% ( 2) 00:16:33.674 8.059 - 8.107: 97.9967% ( 2) 00:16:33.674 8.107 - 8.154: 98.0045% ( 1) 00:16:33.674 8.154 - 8.201: 98.0200% ( 2) 00:16:33.674 8.249 - 8.296: 98.0356% ( 2) 00:16:33.674 8.296 - 8.344: 98.0433% ( 1) 00:16:33.674 8.344 - 8.391: 98.0589% ( 2) 00:16:33.674 8.391 - 8.439: 98.0899% ( 4) 00:16:33.674 8.439 - 8.486: 98.0977% ( 1) 00:16:33.674 8.628 - 8.676: 98.1132% ( 2) 00:16:33.674 8.676 - 8.723: 98.1365% ( 3) 00:16:33.674 8.723 - 8.770: 98.1520% ( 2) 00:16:33.674 8.770 - 8.818: 98.1753% ( 3) 00:16:33.674 8.818 - 8.865: 98.1986% ( 3) 00:16:33.674 8.865 - 8.913: 98.2141% ( 2) 00:16:33.674 9.055 - 9.102: 98.2374% ( 3) 00:16:33.674 9.102 - 9.150: 98.2452% ( 1) 00:16:33.674 9.150 - 9.197: 98.2530% ( 1) 00:16:33.674 9.197 - 9.244: 98.2607% ( 1) 00:16:33.674 9.244 - 9.292: 98.2840% ( 3) 00:16:33.674 9.292 - 9.339: 98.2996% ( 2) 00:16:33.674 9.339 - 9.387: 98.3073% ( 1) 00:16:33.674 9.387 - 9.434: 98.3229% ( 2) 00:16:33.674 9.434 - 9.481: 98.3384% ( 2) 00:16:33.674 9.481 - 9.529: 98.3461% ( 1) 00:16:33.674 9.529 - 9.576: 98.3539% ( 1) 00:16:33.674 9.576 - 9.624: 98.3617% ( 1) 00:16:33.674 9.766 - 9.813: 98.3772% ( 2) 00:16:33.674 9.813 - 9.861: 98.4005% ( 3) 00:16:33.674 9.956 - 10.003: 98.4083% ( 1) 00:16:33.674 10.098 - 10.145: 98.4316% ( 3) 00:16:33.674 10.145 - 10.193: 98.4471% ( 2) 00:16:33.674 10.287 - 10.335: 98.4548% ( 1) 00:16:33.674 10.430 - 10.477: 98.4781% ( 3) 00:16:33.674 10.477 - 10.524: 98.4937% ( 2) 00:16:33.674 10.524 - 10.572: 98.5014% ( 1) 00:16:33.674 10.572 - 10.619: 98.5170% ( 2) 00:16:33.674 10.619 - 10.667: 98.5247% ( 1) 00:16:33.674 10.667 - 10.714: 98.5403% ( 2) 00:16:33.674 10.809 - 10.856: 98.5480% ( 1) 00:16:33.674 10.999 - 11.046: 98.5558% ( 1) 00:16:33.674 11.046 - 11.093: 98.5636% ( 1) 00:16:33.674 11.188 - 11.236: 98.5713% ( 1) 00:16:33.674 11.236 - 11.283: 98.5868% ( 2) 00:16:33.674 11.283 - 11.330: 98.5946% 
( 1) 00:16:33.674 11.378 - 11.425: 98.6024% ( 1) 00:16:33.674 11.567 - 11.615: 98.6179% ( 2) 00:16:33.674 11.899 - 11.947: 98.6334% ( 2) 00:16:33.674 12.041 - 12.089: 98.6412% ( 1) 00:16:33.674 12.089 - 12.136: 98.6567% ( 2) 00:16:33.674 12.136 - 12.231: 98.6723% ( 2) 00:16:33.674 12.231 - 12.326: 98.6956% ( 3) 00:16:33.674 12.326 - 12.421: 98.7111% ( 2) 00:16:33.674 12.421 - 12.516: 98.7266% ( 2) 00:16:33.674 12.516 - 12.610: 98.7344% ( 1) 00:16:33.674 12.800 - 12.895: 98.7577% ( 3) 00:16:33.674 12.990 - 13.084: 98.7654% ( 1) 00:16:33.674 13.084 - 13.179: 98.7732% ( 1) 00:16:33.674 13.274 - 13.369: 98.7965% ( 3) 00:16:33.674 13.369 - 13.464: 98.8120% ( 2) 00:16:33.674 13.464 - 13.559: 98.8198% ( 1) 00:16:33.674 13.559 - 13.653: 98.8275% ( 1) 00:16:33.674 13.653 - 13.748: 98.8353% ( 1) 00:16:33.674 13.748 - 13.843: 98.8741% ( 5) 00:16:33.674 13.938 - 14.033: 98.8819% ( 1) 00:16:33.674 14.033 - 14.127: 98.8897% ( 1) 00:16:33.674 14.127 - 14.222: 98.9052% ( 2) 00:16:33.674 14.222 - 14.317: 98.9207% ( 2) 00:16:33.674 14.317 - 14.412: 98.9285% ( 1) 00:16:33.674 14.412 - 14.507: 98.9363% ( 1) 00:16:33.674 14.507 - 14.601: 98.9518% ( 2) 00:16:33.674 14.601 - 14.696: 98.9595% ( 1) 00:16:33.674 14.791 - 14.886: 98.9673% ( 1) 00:16:33.674 14.981 - 15.076: 98.9828% ( 2) 00:16:33.674 15.739 - 15.834: 98.9906% ( 1) 00:16:33.674 17.256 - 17.351: 98.9984% ( 1) 00:16:33.674 17.351 - 17.446: 99.0061% ( 1) 00:16:33.674 17.446 - 17.541: 99.0294% ( 3) 00:16:33.674 17.541 - 17.636: 99.0527% ( 3) 00:16:33.674 17.636 - 17.730: 99.1304% ( 10) 00:16:33.674 17.730 - 17.825: 99.1770% ( 6) 00:16:33.674 17.825 - 17.920: 99.2080% ( 4) 00:16:33.674 17.920 - 18.015: 99.2857% ( 10) 00:16:33.674 18.015 - 18.110: 99.3400% ( 7) 00:16:33.674 18.110 - 18.204: 99.3788% ( 5) 00:16:33.674 18.204 - 18.299: 99.4798% ( 13) 00:16:33.674 18.299 - 18.394: 99.5108% ( 4) 00:16:33.674 18.394 - 18.489: 99.5885% ( 10) 00:16:33.674 18.489 - 18.584: 99.6506% ( 8) 00:16:33.674 18.584 - 18.679: 99.6817% ( 4) 00:16:33.674 18.679 - 18.773: 99.7360% ( 7) 00:16:33.674 18.773 - 18.868: 99.7438% ( 1) 00:16:33.674 18.868 - 18.963: 99.7981% ( 7) 00:16:33.674 18.963 - 19.058: 99.8137% ( 2) 00:16:33.674 19.153 - 19.247: 99.8214% ( 1) 00:16:33.674 19.247 - 19.342: 99.8292% ( 1) 00:16:33.674 19.342 - 19.437: 99.8369% ( 1) 00:16:33.674 20.196 - 20.290: 99.8447% ( 1) 00:16:33.674 21.902 - 21.997: 99.8525% ( 1) 00:16:33.674 22.281 - 22.376: 99.8602% ( 1) 00:16:33.674 23.609 - 23.704: 99.8680% ( 1) 00:16:33.674 23.893 - 23.988: 99.8758% ( 1) 00:16:33.674 25.031 - 25.221: 99.8835% ( 1) 00:16:33.674 25.600 - 25.790: 99.8913% ( 1) 00:16:33.674 30.341 - 30.530: 99.8991% ( 1) 00:16:33.674 3131.164 - 3155.437: 99.9068% ( 1) 00:16:33.674 3980.705 - 4004.978: 99.9767% ( 9) 00:16:33.674 4004.978 - 4029.250: 100.0000% ( 3) 00:16:33.674 00:16:33.674 Complete histogram 00:16:33.674 ================== 00:16:33.674 Range in us Cumulative Count 00:16:33.674 2.062 - 2.074: 4.0919% ( 527) 00:16:33.674 2.074 - 2.086: 26.5626% ( 2894) 00:16:33.674 2.086 - 2.098: 29.5597% ( 386) 00:16:33.674 2.098 - 2.110: 41.2221% ( 1502) 00:16:33.674 2.110 - 2.121: 55.0198% ( 1777) 00:16:33.674 2.121 - 2.133: 56.8755% ( 239) 00:16:33.674 2.133 - 2.145: 64.3373% ( 961) 00:16:33.674 2.145 - 2.157: 72.4590% ( 1046) 00:16:33.674 2.157 - 2.169: 73.5538% ( 141) 00:16:33.674 2.169 - 2.181: 80.1615% ( 851) 00:16:33.674 2.181 - 2.193: 84.5019% ( 559) 00:16:33.674 2.193 - 2.204: 85.3482% ( 109) 00:16:33.674 2.204 - 2.216: 87.1962% ( 238) 00:16:33.674 2.216 - 2.228: 89.1762% ( 255) 00:16:33.674 2.228 - 
2.240: 90.7291% ( 200) 00:16:33.674 2.240 - 2.252: 92.3985% ( 215) 00:16:33.674 2.252 - 2.264: 93.4700% ( 138) 00:16:33.674 2.264 - 2.276: 93.7728% ( 39) 00:16:33.675 2.276 - 2.287: 94.0290% ( 33) 00:16:33.675 2.287 - 2.299: 94.3862% ( 46) 00:16:33.675 2.299 - 2.311: 94.7201% ( 43) 00:16:33.675 2.311 - 2.323: 95.0229% ( 39) 00:16:33.675 2.323 - 2.335: 95.0773% ( 7) 00:16:33.675 2.335 - 2.347: 95.1161% ( 5) 00:16:33.675 2.347 - 2.359: 95.1937% ( 10) 00:16:33.675 2.359 - 2.370: 95.3490% ( 20) 00:16:33.675 2.370 - 2.382: 95.6052% ( 33) 00:16:33.675 2.382 - 2.394: 95.9314% ( 42) 00:16:33.675 2.394 - 2.406: 96.1410% ( 27) 00:16:33.675 2.406 - 2.418: 96.3351% ( 25) 00:16:33.675 2.418 - 2.430: 96.5991% ( 34) 00:16:33.675 2.430 - 2.441: 96.7777% ( 23) 00:16:33.675 2.441 - 2.453: 96.9485% ( 22) 00:16:33.675 2.453 - 2.465: 97.1271% ( 23) 00:16:33.675 2.465 - 2.477: 97.2436% ( 15) 00:16:33.675 2.477 - 2.489: 97.3911% ( 19) 00:16:33.675 2.489 - 2.501: 97.4377% ( 6) 00:16:33.675 2.501 - 2.513: 97.4920% ( 7) 00:16:33.675 2.513 - 2.524: 97.5775% ( 11) 00:16:33.675 2.524 - 2.536: 97.6396% ( 8) 00:16:33.675 2.536 - 2.548: 97.6706% ( 4) 00:16:33.675 2.548 - 2.560: 97.7017% ( 4) 00:16:33.675 2.560 - 2.572: 97.7405% ( 5) 00:16:33.675 2.572 - 2.584: 97.7560% ( 2) 00:16:33.675 2.584 - 2.596: 97.7638% ( 1) 00:16:33.675 2.596 - 2.607: 97.7793% ( 2) 00:16:33.675 2.607 - 2.619: 97.8182% ( 5) 00:16:33.675 2.619 - 2.631: 97.8414% ( 3) 00:16:33.675 2.631 - 2.643: 97.8492% ( 1) 00:16:33.675 2.655 - 2.667: 97.8647% ( 2) 00:16:33.675 2.667 - 2.679: 97.8880% ( 3) 00:16:33.675 2.679 - 2.690: 97.9036% ( 2) 00:16:33.675 2.702 - 2.714: 97.9191% ( 2) 00:16:33.675 2.714 - 2.726: 97.9346% ( 2) 00:16:33.675 2.726 - 2.738: 97.9657% ( 4) 00:16:33.675 2.738 - 2.750: 97.9890% ( 3) 00:16:33.675 2.750 - 2.761: 97.9967% ( 1) 00:16:33.675 2.761 - 2.773: 98.0123% ( 2) 00:16:33.675 2.773 - 2.785: 98.0278% ( 2) 00:16:33.675 2.785 - 2.797: 98.0589% ( 4) 00:16:33.675 2.797 - 2.809: 98.0666% ( 1) 00:16:33.675 2.809 - 2.821: 98.0744% ( 1) 00:16:33.675 2.833 - 2.844: 98.0821% ( 1) 00:16:33.675 2.844 - 2.856: 98.0899% ( 1) 00:16:33.675 2.856 - 2.868: 98.0977% ( 1) 00:16:33.675 2.880 - 2.892: 98.1054% ( 1) 00:16:33.675 2.892 - 2.904: 98.1210% ( 2) 00:16:33.675 2.904 - 2.916: 98.1287% ( 1) 00:16:33.675 2.939 - 2.951: 98.1443% ( 2) 00:16:33.675 2.951 - 2.963: 98.1598% ( 2) 00:16:33.675 2.963 - 2.975: 98.1831% ( 3) 00:16:33.675 2.999 - 3.010: 98.1909% ( 1) 00:16:33.675 3.022 - 3.034: 98.2064% ( 2) 00:16:33.675 3.058 - 3.081: 98.2141% ( 1) 00:16:33.675 3.081 - 3.105: 98.2374% ( 3) 00:16:33.675 3.105 - 3.129: 98.2452% ( 1) 00:16:33.675 3.176 - 3.200: 98.2763% ( 4) 00:16:33.675 3.200 - 3.224: 98.2840% ( 1) 00:16:33.675 3.224 - 3.247: 98.3073% ( 3) 00:16:33.675 3.247 - 3.271: 98.3151% ( 1) 00:16:33.675 3.319 - 3.342: 98.3306% ( 2) 00:16:33.675 3.342 - 3.366: 98.3384% ( 1) 00:16:33.675 3.366 - 3.390: 98.3461% ( 1) 00:16:33.675 3.390 - 3.413: 98.3617% ( 2) 00:16:33.675 3.437 - 3.461: 98.3694% ( 1) 00:16:33.675 3.484 - 3.508: 98.3772% ( 1) 00:16:33.675 3.508 - 3.532: 98.3927% ( 2) 00:16:33.675 3.532 - 3.556: 98.4238% ( 4) 00:16:33.675 3.556 - 3.579: 98.4316% ( 1) 00:16:33.675 3.579 - 3.603: 98.4393% ( 1) 00:16:33.675 3.603 - 3.627: 98.4471% ( 1) 00:16:33.675 3.627 - 3.650: 98.4548% ( 1) 00:16:33.675 3.650 - 3.674: 98.4626% ( 1) 00:16:33.675 3.674 - 3.698: 98.4704% ( 1) 00:16:33.675 3.698 - 3.721: 98.4859% ( 2) 00:16:33.675 3.745 - 3.769: 98.5092% ( 3) 00:16:33.675 3.769 - 3.793: 98.5247% ( 2) 00:16:33.675 3.793 - 3.816: 98.5403% ( 2) 00:16:33.675 3.840 - 
3.864: 98.5480% ( 1) 00:16:33.675 3.887 - 3.911: 98.5558% ( 1) 00:16:33.675 3.935 - 3.959: 98.5636% ( 1) 00:16:33.675 4.006 - 4.030: 98.5868% ( 3) 00:16:33.675 4.077 - 4.101: 98.6024% ( 2) 00:16:33.675 4.361 - 4.385: 98.6101% ( 1) 00:16:33.675 4.433 - 4.456: 98.6179% ( 1) 00:16:33.675 4.456 - 4.480: 98.6257% ( 1) 00:16:33.675 4.504 - 4.527: 98.6334% ( 1) 00:16:33.675 4.812 - 4.836: 98.6412% ( 1) 00:16:33.675 5.618 - 5.641: 98.6490% ( 1) 00:16:33.675 5.997 - 6.021: 98.6567% ( 1) 00:16:33.675 6.590 - 6.637: 98.6645% ( 1) 00:16:33.675 6.732 - 6.779: 98.6800% ( 2) 00:16:33.675 6.874 - 6.921: 98.6878% ( 1) 00:16:33.675 7.064 - 7.111: 98.6956% ( 1) 00:16:33.675 7.111 - 7.159: 98.7033%[2024-10-28 15:12:20.525387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.934 ( 1) 00:16:33.934 7.253 - 7.301: 98.7111% ( 1) 00:16:33.934 7.348 - 7.396: 98.7188% ( 1) 00:16:33.934 7.396 - 7.443: 98.7344% ( 2) 00:16:33.934 7.443 - 7.490: 98.7421% ( 1) 00:16:33.934 7.490 - 7.538: 98.7577% ( 2) 00:16:33.934 7.680 - 7.727: 98.7732% ( 2) 00:16:33.934 8.012 - 8.059: 98.7810% ( 1) 00:16:33.934 8.059 - 8.107: 98.7965% ( 2) 00:16:33.934 9.624 - 9.671: 98.8043% ( 1) 00:16:33.934 10.145 - 10.193: 98.8120% ( 1) 00:16:33.934 10.667 - 10.714: 98.8198% ( 1) 00:16:33.934 12.705 - 12.800: 98.8275% ( 1) 00:16:33.934 13.653 - 13.748: 98.8353% ( 1) 00:16:33.934 15.360 - 15.455: 98.8431% ( 1) 00:16:33.934 15.644 - 15.739: 98.8508% ( 1) 00:16:33.934 15.739 - 15.834: 98.8741% ( 3) 00:16:33.934 15.834 - 15.929: 98.8897% ( 2) 00:16:33.934 16.024 - 16.119: 98.9595% ( 9) 00:16:33.934 16.119 - 16.213: 98.9751% ( 2) 00:16:33.934 16.213 - 16.308: 99.0139% ( 5) 00:16:33.934 16.308 - 16.403: 99.0683% ( 7) 00:16:33.934 16.403 - 16.498: 99.1071% ( 5) 00:16:33.934 16.498 - 16.593: 99.1537% ( 6) 00:16:33.934 16.593 - 16.687: 99.1770% ( 3) 00:16:33.934 16.687 - 16.782: 99.2002% ( 3) 00:16:33.934 16.782 - 16.877: 99.3012% ( 13) 00:16:33.934 17.161 - 17.256: 99.3167% ( 2) 00:16:33.934 17.256 - 17.351: 99.3322% ( 2) 00:16:33.934 17.351 - 17.446: 99.3400% ( 1) 00:16:33.934 17.541 - 17.636: 99.3633% ( 3) 00:16:33.934 17.636 - 17.730: 99.3711% ( 1) 00:16:33.934 17.730 - 17.825: 99.3788% ( 1) 00:16:33.934 17.825 - 17.920: 99.3866% ( 1) 00:16:33.934 17.920 - 18.015: 99.3944% ( 1) 00:16:33.934 18.584 - 18.679: 99.4021% ( 1) 00:16:33.934 18.773 - 18.868: 99.4099% ( 1) 00:16:33.934 20.006 - 20.101: 99.4177% ( 1) 00:16:33.934 32.427 - 32.616: 99.4254% ( 1) 00:16:33.934 3422.436 - 3446.708: 99.4332% ( 1) 00:16:33.934 3980.705 - 4004.978: 99.8602% ( 55) 00:16:33.934 4004.978 - 4029.250: 99.9922% ( 17) 00:16:33.934 5000.154 - 5024.427: 100.0000% ( 1) 00:16:33.934 00:16:33.934 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:33.934 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:33.934 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:33.934 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:33.934 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:34.192 [ 00:16:34.192 { 00:16:34.192 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:16:34.192 "subtype": "Discovery", 00:16:34.192 "listen_addresses": [], 00:16:34.192 "allow_any_host": true, 00:16:34.192 "hosts": [] 00:16:34.192 }, 00:16:34.192 { 00:16:34.192 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:34.192 "subtype": "NVMe", 00:16:34.192 "listen_addresses": [ 00:16:34.192 { 00:16:34.192 "trtype": "VFIOUSER", 00:16:34.192 "adrfam": "IPv4", 00:16:34.192 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:34.192 "trsvcid": "0" 00:16:34.192 } 00:16:34.192 ], 00:16:34.192 "allow_any_host": true, 00:16:34.192 "hosts": [], 00:16:34.192 "serial_number": "SPDK1", 00:16:34.192 "model_number": "SPDK bdev Controller", 00:16:34.192 "max_namespaces": 32, 00:16:34.192 "min_cntlid": 1, 00:16:34.192 "max_cntlid": 65519, 00:16:34.192 "namespaces": [ 00:16:34.192 { 00:16:34.192 "nsid": 1, 00:16:34.192 "bdev_name": "Malloc1", 00:16:34.192 "name": "Malloc1", 00:16:34.192 "nguid": "421D06F72AAD498F978C549374B5B53B", 00:16:34.192 "uuid": "421d06f7-2aad-498f-978c-549374b5b53b" 00:16:34.192 } 00:16:34.192 ] 00:16:34.192 }, 00:16:34.192 { 00:16:34.192 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:34.192 "subtype": "NVMe", 00:16:34.192 "listen_addresses": [ 00:16:34.192 { 00:16:34.192 "trtype": "VFIOUSER", 00:16:34.192 "adrfam": "IPv4", 00:16:34.192 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:34.192 "trsvcid": "0" 00:16:34.192 } 00:16:34.192 ], 00:16:34.192 "allow_any_host": true, 00:16:34.192 "hosts": [], 00:16:34.192 "serial_number": "SPDK2", 00:16:34.192 "model_number": "SPDK bdev Controller", 00:16:34.192 "max_namespaces": 32, 00:16:34.192 "min_cntlid": 1, 00:16:34.192 "max_cntlid": 65519, 00:16:34.192 "namespaces": [ 00:16:34.192 { 00:16:34.192 "nsid": 1, 00:16:34.192 "bdev_name": "Malloc2", 00:16:34.192 "name": "Malloc2", 00:16:34.192 "nguid": "477C78A0D3F74CA48ED84B20FA876AC2", 00:16:34.192 "uuid": "477c78a0-d3f7-4ca4-8ed8-4b20fa876ac2" 00:16:34.192 } 00:16:34.192 ] 00:16:34.192 } 00:16:34.192 ] 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3155487 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:34.192 15:12:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:34.450 [2024-10-28 15:12:21.141346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.708 Malloc3 00:16:34.708 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:35.273 [2024-10-28 15:12:21.833364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:35.273 15:12:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:35.273 Asynchronous Event Request test 00:16:35.273 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.273 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.273 Registering asynchronous event callbacks... 00:16:35.273 Starting namespace attribute notice tests for all controllers... 00:16:35.273 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:35.273 aer_cb - Changed Namespace 00:16:35.273 Cleaning up... 00:16:35.532 [ 00:16:35.532 { 00:16:35.533 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:35.533 "subtype": "Discovery", 00:16:35.533 "listen_addresses": [], 00:16:35.533 "allow_any_host": true, 00:16:35.533 "hosts": [] 00:16:35.533 }, 00:16:35.533 { 00:16:35.533 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:35.533 "subtype": "NVMe", 00:16:35.533 "listen_addresses": [ 00:16:35.533 { 00:16:35.533 "trtype": "VFIOUSER", 00:16:35.533 "adrfam": "IPv4", 00:16:35.533 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:35.533 "trsvcid": "0" 00:16:35.533 } 00:16:35.533 ], 00:16:35.533 "allow_any_host": true, 00:16:35.533 "hosts": [], 00:16:35.533 "serial_number": "SPDK1", 00:16:35.533 "model_number": "SPDK bdev Controller", 00:16:35.533 "max_namespaces": 32, 00:16:35.533 "min_cntlid": 1, 00:16:35.533 "max_cntlid": 65519, 00:16:35.533 "namespaces": [ 00:16:35.533 { 00:16:35.533 "nsid": 1, 00:16:35.533 "bdev_name": "Malloc1", 00:16:35.533 "name": "Malloc1", 00:16:35.533 "nguid": "421D06F72AAD498F978C549374B5B53B", 00:16:35.533 "uuid": "421d06f7-2aad-498f-978c-549374b5b53b" 00:16:35.533 }, 00:16:35.533 { 00:16:35.533 "nsid": 2, 00:16:35.533 "bdev_name": "Malloc3", 00:16:35.533 "name": "Malloc3", 00:16:35.533 "nguid": "2CA2B7745D39428FAD9A6F4D7ADCC0E4", 00:16:35.533 "uuid": "2ca2b774-5d39-428f-ad9a-6f4d7adcc0e4" 00:16:35.533 } 00:16:35.533 ] 00:16:35.533 }, 00:16:35.533 { 00:16:35.533 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:35.533 "subtype": "NVMe", 00:16:35.533 "listen_addresses": [ 00:16:35.533 { 00:16:35.533 "trtype": "VFIOUSER", 00:16:35.533 "adrfam": "IPv4", 00:16:35.533 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:35.533 "trsvcid": "0" 00:16:35.533 } 00:16:35.533 ], 00:16:35.533 "allow_any_host": true, 00:16:35.533 "hosts": [], 00:16:35.533 "serial_number": "SPDK2", 00:16:35.533 "model_number": "SPDK bdev 
Controller", 00:16:35.533 "max_namespaces": 32, 00:16:35.533 "min_cntlid": 1, 00:16:35.533 "max_cntlid": 65519, 00:16:35.533 "namespaces": [ 00:16:35.533 { 00:16:35.533 "nsid": 1, 00:16:35.533 "bdev_name": "Malloc2", 00:16:35.533 "name": "Malloc2", 00:16:35.533 "nguid": "477C78A0D3F74CA48ED84B20FA876AC2", 00:16:35.533 "uuid": "477c78a0-d3f7-4ca4-8ed8-4b20fa876ac2" 00:16:35.533 } 00:16:35.533 ] 00:16:35.533 } 00:16:35.533 ] 00:16:35.533 15:12:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3155487 00:16:35.533 15:12:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:35.533 15:12:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:35.533 15:12:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:35.533 15:12:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:35.533 [2024-10-28 15:12:22.189721] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:16:35.533 [2024-10-28 15:12:22.189764] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155626 ] 00:16:35.533 [2024-10-28 15:12:22.242610] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:35.533 [2024-10-28 15:12:22.245027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:35.533 [2024-10-28 15:12:22.245056] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4475aa5000 00:16:35.533 [2024-10-28 15:12:22.246037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.533 [2024-10-28 15:12:22.247044] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.533 [2024-10-28 15:12:22.248047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.533 [2024-10-28 15:12:22.249049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.533 [2024-10-28 15:12:22.250062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.533 [2024-10-28 15:12:22.251069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:35.533 [2024-10-28 15:12:22.252078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:35.533 [2024-10-28 15:12:22.253080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:16:35.533 [2024-10-28 15:12:22.254090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:35.533 [2024-10-28 15:12:22.254117] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4475a9a000 00:16:35.533 [2024-10-28 15:12:22.255232] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:35.533 [2024-10-28 15:12:22.269246] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:35.533 [2024-10-28 15:12:22.269292] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:35.533 [2024-10-28 15:12:22.274404] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:35.533 [2024-10-28 15:12:22.274462] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:35.533 [2024-10-28 15:12:22.274558] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:35.533 [2024-10-28 15:12:22.274585] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:35.533 [2024-10-28 15:12:22.274595] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:35.533 [2024-10-28 15:12:22.275406] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:35.533 [2024-10-28 15:12:22.275428] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:35.533 [2024-10-28 15:12:22.275441] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:35.533 [2024-10-28 15:12:22.276412] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:35.533 [2024-10-28 15:12:22.276433] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:35.533 [2024-10-28 15:12:22.276447] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.533 [2024-10-28 15:12:22.277415] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:35.533 [2024-10-28 15:12:22.277436] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.534 [2024-10-28 15:12:22.278418] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:35.534 [2024-10-28 15:12:22.278440] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
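At this point the trace is walking the standard controller-enable handshake: read CAP/VS, observe CC.EN = 0 and CSTS.RDY = 0, then write CC (register offset 0x14) and poll CSTS (offset 0x1c) until RDY flips. With -L nvme -L nvme_vfio -L vfio_pci the output is dense, so one possible convenience filter, built only from the function names that appear in this trace, is to keep just the vfio-user register accesses:

# Re-run the @83 identify pass and keep only the register reads/writes, which is
# where the CC (0x14) / CSTS (0x1c) enable handshake is visible. Paths and flags
# are the ones printed above; the debug output is assumed to arrive on stderr.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_nvme_identify" \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -L nvme -L nvme_vfio -L vfio_pci 2>&1 |
  grep -E 'nvme_vfio_ctrlr_(get|set)_reg'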
00:16:35.534 [2024-10-28 15:12:22.278450] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:35.534 [2024-10-28 15:12:22.278461] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.534 [2024-10-28 15:12:22.278572] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:35.534 [2024-10-28 15:12:22.278581] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.534 [2024-10-28 15:12:22.278589] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:35.534 [2024-10-28 15:12:22.279431] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:35.534 [2024-10-28 15:12:22.280437] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:35.534 [2024-10-28 15:12:22.281445] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:35.534 [2024-10-28 15:12:22.282440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:35.534 [2024-10-28 15:12:22.282511] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.534 [2024-10-28 15:12:22.283450] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:35.534 [2024-10-28 15:12:22.283471] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.534 [2024-10-28 15:12:22.283480] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.283504] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:35.534 [2024-10-28 15:12:22.283518] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.283541] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.534 [2024-10-28 15:12:22.283550] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.534 [2024-10-28 15:12:22.283557] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.534 [2024-10-28 15:12:22.283578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.534 [2024-10-28 15:12:22.291669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:35.534 
[2024-10-28 15:12:22.291695] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:35.534 [2024-10-28 15:12:22.291705] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:35.534 [2024-10-28 15:12:22.291712] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:35.534 [2024-10-28 15:12:22.291720] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:35.534 [2024-10-28 15:12:22.291729] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:35.534 [2024-10-28 15:12:22.291737] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:35.534 [2024-10-28 15:12:22.291745] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.291759] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.291776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:35.534 [2024-10-28 15:12:22.299662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:35.534 [2024-10-28 15:12:22.299693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.534 [2024-10-28 15:12:22.299708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.534 [2024-10-28 15:12:22.299720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.534 [2024-10-28 15:12:22.299735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.534 [2024-10-28 15:12:22.299744] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.299756] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.299770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:35.534 [2024-10-28 15:12:22.307659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:35.534 [2024-10-28 15:12:22.307683] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:35.534 [2024-10-28 15:12:22.307693] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:16:35.534 [2024-10-28 15:12:22.307706] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.307717] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.307731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.534 [2024-10-28 15:12:22.315674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:35.534 [2024-10-28 15:12:22.315755] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.315774] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.315789] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:35.534 [2024-10-28 15:12:22.315797] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:35.534 [2024-10-28 15:12:22.315803] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.534 [2024-10-28 15:12:22.315813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:35.534 [2024-10-28 15:12:22.323662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:35.534 [2024-10-28 15:12:22.323687] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:35.534 [2024-10-28 15:12:22.323710] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.323726] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.323739] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.534 [2024-10-28 15:12:22.323747] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.534 [2024-10-28 15:12:22.323753] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.534 [2024-10-28 15:12:22.323763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.534 [2024-10-28 15:12:22.331662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:35.534 [2024-10-28 15:12:22.331693] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.331710] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:35.534 [2024-10-28 15:12:22.331724] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.534 [2024-10-28 15:12:22.331732] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.534 [2024-10-28 15:12:22.331738] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.535 [2024-10-28 15:12:22.331748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.339664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:35.535 [2024-10-28 15:12:22.339688] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:35.535 [2024-10-28 15:12:22.339701] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:35.535 [2024-10-28 15:12:22.339717] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:35.535 [2024-10-28 15:12:22.339729] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:35.535 [2024-10-28 15:12:22.339738] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:35.535 [2024-10-28 15:12:22.339747] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:35.535 [2024-10-28 15:12:22.339757] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:35.535 [2024-10-28 15:12:22.339765] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:35.535 [2024-10-28 15:12:22.339773] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:35.535 [2024-10-28 15:12:22.339800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.347665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:35.535 [2024-10-28 15:12:22.347692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.355663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:35.535 [2024-10-28 15:12:22.355688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.363663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
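One detail worth decoding from the completions above: the SET FEATURES NUMBER OF QUEUES step came back with cdw0:7e007e. The low and high 16-bit halves are the zero-based counts of I/O submission and completion queues the target granted, so 0x7e (126) means 127 queues in each direction, consistent with the "Number of I/O Submission Queues: 127 / Number of I/O Completion Queues: 127" figures in the identify dump earlier in this log. A one-line sanity check of that decoding:

# cdw0 for Set Features (Number of Queues): bits 15:0 = NSQA, bits 31:16 = NCQA, both zero-based.
cdw0=0x7e007e
printf 'SQs=%d CQs=%d\n' $(( (cdw0 & 0xffff) + 1 )) $(( (cdw0 >> 16) + 1 ))   # prints SQs=127 CQs=127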
00:16:35.535 [2024-10-28 15:12:22.363688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.371679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:35.535 [2024-10-28 15:12:22.371715] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:35.535 [2024-10-28 15:12:22.371727] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:35.535 [2024-10-28 15:12:22.371734] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:35.535 [2024-10-28 15:12:22.371740] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:35.535 [2024-10-28 15:12:22.371746] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:35.535 [2024-10-28 15:12:22.371756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:35.535 [2024-10-28 15:12:22.371768] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:35.535 [2024-10-28 15:12:22.371776] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:35.535 [2024-10-28 15:12:22.371783] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.535 [2024-10-28 15:12:22.371792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.371803] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:35.535 [2024-10-28 15:12:22.371811] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.535 [2024-10-28 15:12:22.371817] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.535 [2024-10-28 15:12:22.371826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.371843] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:35.535 [2024-10-28 15:12:22.371852] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:35.535 [2024-10-28 15:12:22.371858] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:35.535 [2024-10-28 15:12:22.371867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:35.535 [2024-10-28 15:12:22.379663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:35.535 [2024-10-28 15:12:22.379691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:35.535 [2024-10-28 15:12:22.379709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:35.535 
[2024-10-28 15:12:22.379722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:35.535 ===================================================== 00:16:35.535 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:35.535 ===================================================== 00:16:35.535 Controller Capabilities/Features 00:16:35.535 ================================ 00:16:35.535 Vendor ID: 4e58 00:16:35.535 Subsystem Vendor ID: 4e58 00:16:35.535 Serial Number: SPDK2 00:16:35.535 Model Number: SPDK bdev Controller 00:16:35.535 Firmware Version: 25.01 00:16:35.535 Recommended Arb Burst: 6 00:16:35.535 IEEE OUI Identifier: 8d 6b 50 00:16:35.535 Multi-path I/O 00:16:35.535 May have multiple subsystem ports: Yes 00:16:35.535 May have multiple controllers: Yes 00:16:35.535 Associated with SR-IOV VF: No 00:16:35.535 Max Data Transfer Size: 131072 00:16:35.535 Max Number of Namespaces: 32 00:16:35.535 Max Number of I/O Queues: 127 00:16:35.535 NVMe Specification Version (VS): 1.3 00:16:35.535 NVMe Specification Version (Identify): 1.3 00:16:35.535 Maximum Queue Entries: 256 00:16:35.535 Contiguous Queues Required: Yes 00:16:35.535 Arbitration Mechanisms Supported 00:16:35.535 Weighted Round Robin: Not Supported 00:16:35.535 Vendor Specific: Not Supported 00:16:35.535 Reset Timeout: 15000 ms 00:16:35.535 Doorbell Stride: 4 bytes 00:16:35.535 NVM Subsystem Reset: Not Supported 00:16:35.535 Command Sets Supported 00:16:35.535 NVM Command Set: Supported 00:16:35.535 Boot Partition: Not Supported 00:16:35.535 Memory Page Size Minimum: 4096 bytes 00:16:35.535 Memory Page Size Maximum: 4096 bytes 00:16:35.535 Persistent Memory Region: Not Supported 00:16:35.535 Optional Asynchronous Events Supported 00:16:35.535 Namespace Attribute Notices: Supported 00:16:35.535 Firmware Activation Notices: Not Supported 00:16:35.535 ANA Change Notices: Not Supported 00:16:35.535 PLE Aggregate Log Change Notices: Not Supported 00:16:35.535 LBA Status Info Alert Notices: Not Supported 00:16:35.535 EGE Aggregate Log Change Notices: Not Supported 00:16:35.535 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.535 Zone Descriptor Change Notices: Not Supported 00:16:35.535 Discovery Log Change Notices: Not Supported 00:16:35.535 Controller Attributes 00:16:35.535 128-bit Host Identifier: Supported 00:16:35.535 Non-Operational Permissive Mode: Not Supported 00:16:35.535 NVM Sets: Not Supported 00:16:35.535 Read Recovery Levels: Not Supported 00:16:35.535 Endurance Groups: Not Supported 00:16:35.535 Predictable Latency Mode: Not Supported 00:16:35.535 Traffic Based Keep ALive: Not Supported 00:16:35.535 Namespace Granularity: Not Supported 00:16:35.536 SQ Associations: Not Supported 00:16:35.536 UUID List: Not Supported 00:16:35.536 Multi-Domain Subsystem: Not Supported 00:16:35.536 Fixed Capacity Management: Not Supported 00:16:35.536 Variable Capacity Management: Not Supported 00:16:35.536 Delete Endurance Group: Not Supported 00:16:35.536 Delete NVM Set: Not Supported 00:16:35.536 Extended LBA Formats Supported: Not Supported 00:16:35.536 Flexible Data Placement Supported: Not Supported 00:16:35.536 00:16:35.536 Controller Memory Buffer Support 00:16:35.536 ================================ 00:16:35.536 Supported: No 00:16:35.536 00:16:35.536 Persistent Memory Region Support 00:16:35.536 ================================ 00:16:35.536 Supported: No 00:16:35.536 00:16:35.536 Admin Command Set Attributes 
00:16:35.536 ============================ 00:16:35.536 Security Send/Receive: Not Supported 00:16:35.536 Format NVM: Not Supported 00:16:35.536 Firmware Activate/Download: Not Supported 00:16:35.536 Namespace Management: Not Supported 00:16:35.536 Device Self-Test: Not Supported 00:16:35.536 Directives: Not Supported 00:16:35.536 NVMe-MI: Not Supported 00:16:35.536 Virtualization Management: Not Supported 00:16:35.536 Doorbell Buffer Config: Not Supported 00:16:35.536 Get LBA Status Capability: Not Supported 00:16:35.536 Command & Feature Lockdown Capability: Not Supported 00:16:35.536 Abort Command Limit: 4 00:16:35.536 Async Event Request Limit: 4 00:16:35.536 Number of Firmware Slots: N/A 00:16:35.536 Firmware Slot 1 Read-Only: N/A 00:16:35.536 Firmware Activation Without Reset: N/A 00:16:35.536 Multiple Update Detection Support: N/A 00:16:35.536 Firmware Update Granularity: No Information Provided 00:16:35.536 Per-Namespace SMART Log: No 00:16:35.536 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.536 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:35.536 Command Effects Log Page: Supported 00:16:35.536 Get Log Page Extended Data: Supported 00:16:35.536 Telemetry Log Pages: Not Supported 00:16:35.536 Persistent Event Log Pages: Not Supported 00:16:35.536 Supported Log Pages Log Page: May Support 00:16:35.536 Commands Supported & Effects Log Page: Not Supported 00:16:35.536 Feature Identifiers & Effects Log Page:May Support 00:16:35.536 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.536 Data Area 4 for Telemetry Log: Not Supported 00:16:35.536 Error Log Page Entries Supported: 128 00:16:35.536 Keep Alive: Supported 00:16:35.536 Keep Alive Granularity: 10000 ms 00:16:35.536 00:16:35.536 NVM Command Set Attributes 00:16:35.536 ========================== 00:16:35.536 Submission Queue Entry Size 00:16:35.536 Max: 64 00:16:35.536 Min: 64 00:16:35.536 Completion Queue Entry Size 00:16:35.536 Max: 16 00:16:35.536 Min: 16 00:16:35.536 Number of Namespaces: 32 00:16:35.536 Compare Command: Supported 00:16:35.536 Write Uncorrectable Command: Not Supported 00:16:35.536 Dataset Management Command: Supported 00:16:35.536 Write Zeroes Command: Supported 00:16:35.536 Set Features Save Field: Not Supported 00:16:35.536 Reservations: Not Supported 00:16:35.536 Timestamp: Not Supported 00:16:35.536 Copy: Supported 00:16:35.536 Volatile Write Cache: Present 00:16:35.536 Atomic Write Unit (Normal): 1 00:16:35.536 Atomic Write Unit (PFail): 1 00:16:35.536 Atomic Compare & Write Unit: 1 00:16:35.536 Fused Compare & Write: Supported 00:16:35.536 Scatter-Gather List 00:16:35.536 SGL Command Set: Supported (Dword aligned) 00:16:35.536 SGL Keyed: Not Supported 00:16:35.536 SGL Bit Bucket Descriptor: Not Supported 00:16:35.536 SGL Metadata Pointer: Not Supported 00:16:35.536 Oversized SGL: Not Supported 00:16:35.536 SGL Metadata Address: Not Supported 00:16:35.536 SGL Offset: Not Supported 00:16:35.536 Transport SGL Data Block: Not Supported 00:16:35.536 Replay Protected Memory Block: Not Supported 00:16:35.536 00:16:35.536 Firmware Slot Information 00:16:35.536 ========================= 00:16:35.536 Active slot: 1 00:16:35.536 Slot 1 Firmware Revision: 25.01 00:16:35.536 00:16:35.536 00:16:35.536 Commands Supported and Effects 00:16:35.536 ============================== 00:16:35.536 Admin Commands 00:16:35.536 -------------- 00:16:35.536 Get Log Page (02h): Supported 00:16:35.536 Identify (06h): Supported 00:16:35.536 Abort (08h): Supported 00:16:35.536 Set Features (09h): Supported 
00:16:35.536 Get Features (0Ah): Supported 00:16:35.536 Asynchronous Event Request (0Ch): Supported 00:16:35.536 Keep Alive (18h): Supported 00:16:35.536 I/O Commands 00:16:35.536 ------------ 00:16:35.536 Flush (00h): Supported LBA-Change 00:16:35.536 Write (01h): Supported LBA-Change 00:16:35.536 Read (02h): Supported 00:16:35.536 Compare (05h): Supported 00:16:35.536 Write Zeroes (08h): Supported LBA-Change 00:16:35.536 Dataset Management (09h): Supported LBA-Change 00:16:35.536 Copy (19h): Supported LBA-Change 00:16:35.536 00:16:35.536 Error Log 00:16:35.536 ========= 00:16:35.536 00:16:35.536 Arbitration 00:16:35.536 =========== 00:16:35.536 Arbitration Burst: 1 00:16:35.536 00:16:35.536 Power Management 00:16:35.536 ================ 00:16:35.536 Number of Power States: 1 00:16:35.536 Current Power State: Power State #0 00:16:35.536 Power State #0: 00:16:35.536 Max Power: 0.00 W 00:16:35.536 Non-Operational State: Operational 00:16:35.536 Entry Latency: Not Reported 00:16:35.536 Exit Latency: Not Reported 00:16:35.536 Relative Read Throughput: 0 00:16:35.536 Relative Read Latency: 0 00:16:35.536 Relative Write Throughput: 0 00:16:35.536 Relative Write Latency: 0 00:16:35.536 Idle Power: Not Reported 00:16:35.536 Active Power: Not Reported 00:16:35.536 Non-Operational Permissive Mode: Not Supported 00:16:35.536 00:16:35.536 Health Information 00:16:35.536 ================== 00:16:35.537 Critical Warnings: 00:16:35.537 Available Spare Space: OK 00:16:35.537 Temperature: OK 00:16:35.537 Device Reliability: OK 00:16:35.537 Read Only: No 00:16:35.537 Volatile Memory Backup: OK 00:16:35.537 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:35.537 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:35.537 Available Spare: 0% 00:16:35.537 Available Sp[2024-10-28 15:12:22.379839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:35.537 [2024-10-28 15:12:22.387676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:35.537 [2024-10-28 15:12:22.387729] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:35.537 [2024-10-28 15:12:22.387748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.537 [2024-10-28 15:12:22.387759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.537 [2024-10-28 15:12:22.387770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.537 [2024-10-28 15:12:22.387779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.537 [2024-10-28 15:12:22.387873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:35.537 [2024-10-28 15:12:22.387896] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:35.537 [2024-10-28 15:12:22.388877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:35.537 [2024-10-28 15:12:22.388965] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:35.537 [2024-10-28 15:12:22.388980] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:35.537 [2024-10-28 15:12:22.389886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:35.537 [2024-10-28 15:12:22.389911] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:35.537 [2024-10-28 15:12:22.389979] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:35.537 [2024-10-28 15:12:22.391161] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:35.795 are Threshold: 0% 00:16:35.795 Life Percentage Used: 0% 00:16:35.795 Data Units Read: 0 00:16:35.795 Data Units Written: 0 00:16:35.795 Host Read Commands: 0 00:16:35.795 Host Write Commands: 0 00:16:35.795 Controller Busy Time: 0 minutes 00:16:35.795 Power Cycles: 0 00:16:35.795 Power On Hours: 0 hours 00:16:35.795 Unsafe Shutdowns: 0 00:16:35.795 Unrecoverable Media Errors: 0 00:16:35.795 Lifetime Error Log Entries: 0 00:16:35.795 Warning Temperature Time: 0 minutes 00:16:35.795 Critical Temperature Time: 0 minutes 00:16:35.795 00:16:35.795 Number of Queues 00:16:35.795 ================ 00:16:35.795 Number of I/O Submission Queues: 127 00:16:35.795 Number of I/O Completion Queues: 127 00:16:35.795 00:16:35.795 Active Namespaces 00:16:35.795 ================= 00:16:35.795 Namespace ID:1 00:16:35.795 Error Recovery Timeout: Unlimited 00:16:35.795 Command Set Identifier: NVM (00h) 00:16:35.795 Deallocate: Supported 00:16:35.795 Deallocated/Unwritten Error: Not Supported 00:16:35.795 Deallocated Read Value: Unknown 00:16:35.795 Deallocate in Write Zeroes: Not Supported 00:16:35.795 Deallocated Guard Field: 0xFFFF 00:16:35.795 Flush: Supported 00:16:35.795 Reservation: Supported 00:16:35.795 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.795 Size (in LBAs): 131072 (0GiB) 00:16:35.795 Capacity (in LBAs): 131072 (0GiB) 00:16:35.795 Utilization (in LBAs): 131072 (0GiB) 00:16:35.795 NGUID: 477C78A0D3F74CA48ED84B20FA876AC2 00:16:35.795 UUID: 477c78a0-d3f7-4ca4-8ed8-4b20fa876ac2 00:16:35.795 Thin Provisioning: Not Supported 00:16:35.795 Per-NS Atomic Units: Yes 00:16:35.795 Atomic Boundary Size (Normal): 0 00:16:35.795 Atomic Boundary Size (PFail): 0 00:16:35.795 Atomic Boundary Offset: 0 00:16:35.795 Maximum Single Source Range Length: 65535 00:16:35.795 Maximum Copy Length: 65535 00:16:35.795 Maximum Source Range Count: 1 00:16:35.795 NGUID/EUI64 Never Reused: No 00:16:35.795 Namespace Write Protected: No 00:16:35.795 Number of LBA Formats: 1 00:16:35.795 Current LBA Format: LBA Format #00 00:16:35.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.795 00:16:35.795 15:12:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:36.054 [2024-10-28 15:12:22.693682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:41.318 Initializing NVMe Controllers 00:16:41.318 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:41.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:41.318 Initialization complete. Launching workers. 00:16:41.318 ======================================================== 00:16:41.318 Latency(us) 00:16:41.318 Device Information : IOPS MiB/s Average min max 00:16:41.318 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33924.40 132.52 3774.45 1186.98 10264.34 00:16:41.318 ======================================================== 00:16:41.318 Total : 33924.40 132.52 3774.45 1186.98 10264.34 00:16:41.318 00:16:41.318 [2024-10-28 15:12:27.803033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:41.318 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:41.318 [2024-10-28 15:12:28.063746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.586 Initializing NVMe Controllers 00:16:46.586 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.586 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:46.586 Initialization complete. Launching workers. 00:16:46.586 ======================================================== 00:16:46.586 Latency(us) 00:16:46.586 Device Information : IOPS MiB/s Average min max 00:16:46.586 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31217.67 121.94 4099.51 1202.94 8985.99 00:16:46.586 ======================================================== 00:16:46.586 Total : 31217.67 121.94 4099.51 1202.94 8985.99 00:16:46.586 00:16:46.586 [2024-10-28 15:12:33.086242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.586 15:12:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:46.586 [2024-10-28 15:12:33.325358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.893 [2024-10-28 15:12:38.481808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.893 Initializing NVMe Controllers 00:16:51.893 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:51.893 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:51.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:51.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:51.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:51.893 Initialization complete. Launching workers. 
00:16:51.893 Starting thread on core 2 00:16:51.893 Starting thread on core 3 00:16:51.893 Starting thread on core 1 00:16:51.893 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:52.152 [2024-10-28 15:12:38.848720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.432 [2024-10-28 15:12:41.917959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.432 Initializing NVMe Controllers 00:16:55.432 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.432 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:55.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:55.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:55.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:55.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:55.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:55.433 Initialization complete. Launching workers. 00:16:55.433 Starting thread on core 1 with urgent priority queue 00:16:55.433 Starting thread on core 2 with urgent priority queue 00:16:55.433 Starting thread on core 3 with urgent priority queue 00:16:55.433 Starting thread on core 0 with urgent priority queue 00:16:55.433 SPDK bdev Controller (SPDK2 ) core 0: 2650.67 IO/s 37.73 secs/100000 ios 00:16:55.433 SPDK bdev Controller (SPDK2 ) core 1: 4804.67 IO/s 20.81 secs/100000 ios 00:16:55.433 SPDK bdev Controller (SPDK2 ) core 2: 4788.00 IO/s 20.89 secs/100000 ios 00:16:55.433 SPDK bdev Controller (SPDK2 ) core 3: 5128.33 IO/s 19.50 secs/100000 ios 00:16:55.433 ======================================================== 00:16:55.433 00:16:55.433 15:12:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:55.690 [2024-10-28 15:12:42.319191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.690 Initializing NVMe Controllers 00:16:55.690 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.690 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.690 Namespace ID: 1 size: 0GB 00:16:55.690 Initialization complete. 00:16:55.690 INFO: using host memory buffer for IO 00:16:55.690 Hello world! 
00:16:55.690 [2024-10-28 15:12:42.330262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.690 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:55.948 [2024-10-28 15:12:42.663400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:57.321 Initializing NVMe Controllers 00:16:57.321 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.321 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.321 Initialization complete. Launching workers. 00:16:57.321 submit (in ns) avg, min, max = 8204.6, 3508.9, 4018305.6 00:16:57.321 complete (in ns) avg, min, max = 26843.5, 2062.2, 6002514.4 00:16:57.321 00:16:57.321 Submit histogram 00:16:57.321 ================ 00:16:57.321 Range in us Cumulative Count 00:16:57.321 3.508 - 3.532: 0.0228% ( 3) 00:16:57.321 3.532 - 3.556: 0.2886% ( 35) 00:16:57.321 3.556 - 3.579: 1.4278% ( 150) 00:16:57.321 3.579 - 3.603: 5.2176% ( 499) 00:16:57.321 3.603 - 3.627: 10.0327% ( 634) 00:16:57.321 3.627 - 3.650: 20.0881% ( 1324) 00:16:57.321 3.650 - 3.674: 29.8246% ( 1282) 00:16:57.321 3.674 - 3.698: 39.8876% ( 1325) 00:16:57.321 3.698 - 3.721: 48.7734% ( 1170) 00:16:57.321 3.721 - 3.745: 54.3176% ( 730) 00:16:57.321 3.745 - 3.769: 59.6491% ( 702) 00:16:57.321 3.769 - 3.793: 63.6288% ( 524) 00:16:57.321 3.793 - 3.816: 68.0945% ( 588) 00:16:57.321 3.816 - 3.840: 71.5881% ( 460) 00:16:57.321 3.840 - 3.864: 74.7779% ( 420) 00:16:57.321 3.864 - 3.887: 78.2411% ( 456) 00:16:57.321 3.887 - 3.911: 81.6283% ( 446) 00:16:57.321 3.911 - 3.935: 84.8789% ( 428) 00:16:57.321 3.935 - 3.959: 87.0434% ( 285) 00:16:57.321 3.959 - 3.982: 88.8737% ( 241) 00:16:57.321 3.982 - 4.006: 90.4914% ( 213) 00:16:57.321 4.006 - 4.030: 91.9799% ( 196) 00:16:57.321 4.030 - 4.053: 93.2483% ( 167) 00:16:57.321 4.053 - 4.077: 94.2584% ( 133) 00:16:57.321 4.077 - 4.101: 94.9571% ( 92) 00:16:57.321 4.101 - 4.124: 95.4356% ( 63) 00:16:57.321 4.124 - 4.148: 95.7014% ( 35) 00:16:57.321 4.148 - 4.172: 95.9824% ( 37) 00:16:57.321 4.172 - 4.196: 96.1115% ( 17) 00:16:57.321 4.196 - 4.219: 96.2406% ( 17) 00:16:57.321 4.219 - 4.243: 96.3165% ( 10) 00:16:57.321 4.243 - 4.267: 96.4457% ( 17) 00:16:57.321 4.267 - 4.290: 96.5140% ( 9) 00:16:57.321 4.290 - 4.314: 96.6051% ( 12) 00:16:57.321 4.314 - 4.338: 96.7039% ( 13) 00:16:57.321 4.338 - 4.361: 96.7419% ( 5) 00:16:57.321 4.361 - 4.385: 96.8178% ( 10) 00:16:57.321 4.385 - 4.409: 96.8710% ( 7) 00:16:57.321 4.409 - 4.433: 96.8862% ( 2) 00:16:57.321 4.433 - 4.456: 96.9317% ( 6) 00:16:57.321 4.456 - 4.480: 96.9621% ( 4) 00:16:57.321 4.480 - 4.504: 96.9849% ( 3) 00:16:57.321 4.504 - 4.527: 97.0001% ( 2) 00:16:57.321 4.527 - 4.551: 97.0077% ( 1) 00:16:57.321 4.575 - 4.599: 97.0153% ( 1) 00:16:57.321 4.599 - 4.622: 97.0305% ( 2) 00:16:57.321 4.622 - 4.646: 97.0532% ( 3) 00:16:57.321 4.646 - 4.670: 97.0608% ( 1) 00:16:57.321 4.670 - 4.693: 97.0684% ( 1) 00:16:57.321 4.693 - 4.717: 97.0912% ( 3) 00:16:57.321 4.717 - 4.741: 97.1444% ( 7) 00:16:57.321 4.741 - 4.764: 97.1672% ( 3) 00:16:57.321 4.764 - 4.788: 97.2051% ( 5) 00:16:57.321 4.788 - 4.812: 97.2355% ( 4) 00:16:57.321 4.812 - 4.836: 97.3039% ( 9) 00:16:57.321 4.836 - 4.859: 97.3570% ( 7) 00:16:57.321 4.859 - 4.883: 97.3950% ( 5) 00:16:57.321 4.883 - 
4.907: 97.4634% ( 9) 00:16:57.321 4.907 - 4.930: 97.5165% ( 7) 00:16:57.321 4.930 - 4.954: 97.6304% ( 15) 00:16:57.321 4.954 - 4.978: 97.6988% ( 9) 00:16:57.321 4.978 - 5.001: 97.7292% ( 4) 00:16:57.321 5.001 - 5.025: 97.7747% ( 6) 00:16:57.321 5.025 - 5.049: 97.8583% ( 11) 00:16:57.321 5.049 - 5.073: 97.8887% ( 4) 00:16:57.321 5.073 - 5.096: 97.9266% ( 5) 00:16:57.321 5.096 - 5.120: 97.9646% ( 5) 00:16:57.321 5.120 - 5.144: 97.9950% ( 4) 00:16:57.321 5.144 - 5.167: 98.0102% ( 2) 00:16:57.321 5.167 - 5.191: 98.0406% ( 4) 00:16:57.321 5.191 - 5.215: 98.0709% ( 4) 00:16:57.321 5.215 - 5.239: 98.0785% ( 1) 00:16:57.321 5.239 - 5.262: 98.0861% ( 1) 00:16:57.321 5.286 - 5.310: 98.1013% ( 2) 00:16:57.321 5.310 - 5.333: 98.1165% ( 2) 00:16:57.321 5.333 - 5.357: 98.1317% ( 2) 00:16:57.321 5.357 - 5.381: 98.1621% ( 4) 00:16:57.321 5.381 - 5.404: 98.1773% ( 2) 00:16:57.321 5.428 - 5.452: 98.1849% ( 1) 00:16:57.321 5.452 - 5.476: 98.2000% ( 2) 00:16:57.321 5.499 - 5.523: 98.2076% ( 1) 00:16:57.321 5.523 - 5.547: 98.2152% ( 1) 00:16:57.321 5.570 - 5.594: 98.2228% ( 1) 00:16:57.321 5.594 - 5.618: 98.2380% ( 2) 00:16:57.321 5.641 - 5.665: 98.2456% ( 1) 00:16:57.321 5.713 - 5.736: 98.2532% ( 1) 00:16:57.321 5.926 - 5.950: 98.2608% ( 1) 00:16:57.321 6.044 - 6.068: 98.2684% ( 1) 00:16:57.321 6.116 - 6.163: 98.2760% ( 1) 00:16:57.321 6.258 - 6.305: 98.2836% ( 1) 00:16:57.321 6.590 - 6.637: 98.2912% ( 1) 00:16:57.321 6.732 - 6.779: 98.3064% ( 2) 00:16:57.321 6.779 - 6.827: 98.3140% ( 1) 00:16:57.321 7.159 - 7.206: 98.3292% ( 2) 00:16:57.321 7.253 - 7.301: 98.3443% ( 2) 00:16:57.321 7.301 - 7.348: 98.3671% ( 3) 00:16:57.321 7.396 - 7.443: 98.3747% ( 1) 00:16:57.321 7.443 - 7.490: 98.3823% ( 1) 00:16:57.321 7.490 - 7.538: 98.3975% ( 2) 00:16:57.321 7.538 - 7.585: 98.4051% ( 1) 00:16:57.321 7.585 - 7.633: 98.4127% ( 1) 00:16:57.321 7.633 - 7.680: 98.4203% ( 1) 00:16:57.321 7.680 - 7.727: 98.4279% ( 1) 00:16:57.321 7.727 - 7.775: 98.4355% ( 1) 00:16:57.321 7.822 - 7.870: 98.4431% ( 1) 00:16:57.321 7.870 - 7.917: 98.4507% ( 1) 00:16:57.321 7.917 - 7.964: 98.4583% ( 1) 00:16:57.321 7.964 - 8.012: 98.4735% ( 2) 00:16:57.321 8.012 - 8.059: 98.4811% ( 1) 00:16:57.321 8.107 - 8.154: 98.4962% ( 2) 00:16:57.321 8.201 - 8.249: 98.5038% ( 1) 00:16:57.321 8.249 - 8.296: 98.5190% ( 2) 00:16:57.321 8.296 - 8.344: 98.5266% ( 1) 00:16:57.321 8.581 - 8.628: 98.5418% ( 2) 00:16:57.321 8.676 - 8.723: 98.5646% ( 3) 00:16:57.321 8.723 - 8.770: 98.5722% ( 1) 00:16:57.321 8.770 - 8.818: 98.5798% ( 1) 00:16:57.321 8.865 - 8.913: 98.5950% ( 2) 00:16:57.321 8.913 - 8.960: 98.6026% ( 1) 00:16:57.321 9.102 - 9.150: 98.6178% ( 2) 00:16:57.321 9.197 - 9.244: 98.6254% ( 1) 00:16:57.321 9.292 - 9.339: 98.6329% ( 1) 00:16:57.321 9.387 - 9.434: 98.6405% ( 1) 00:16:57.321 9.481 - 9.529: 98.6633% ( 3) 00:16:57.321 9.576 - 9.624: 98.6785% ( 2) 00:16:57.321 9.766 - 9.813: 98.6861% ( 1) 00:16:57.321 10.050 - 10.098: 98.7089% ( 3) 00:16:57.321 10.145 - 10.193: 98.7241% ( 2) 00:16:57.321 10.193 - 10.240: 98.7393% ( 2) 00:16:57.321 10.335 - 10.382: 98.7469% ( 1) 00:16:57.321 10.430 - 10.477: 98.7545% ( 1) 00:16:57.321 10.572 - 10.619: 98.7621% ( 1) 00:16:57.321 10.761 - 10.809: 98.7697% ( 1) 00:16:57.321 10.999 - 11.046: 98.7848% ( 2) 00:16:57.321 11.188 - 11.236: 98.7924% ( 1) 00:16:57.321 11.378 - 11.425: 98.8000% ( 1) 00:16:57.321 11.520 - 11.567: 98.8076% ( 1) 00:16:57.321 11.567 - 11.615: 98.8152% ( 1) 00:16:57.321 11.662 - 11.710: 98.8228% ( 1) 00:16:57.321 11.947 - 11.994: 98.8304% ( 1) 00:16:57.321 12.136 - 12.231: 98.8456% ( 2) 00:16:57.321 
12.231 - 12.326: 98.8532% ( 1) 00:16:57.321 12.326 - 12.421: 98.8684% ( 2) 00:16:57.321 12.516 - 12.610: 98.8760% ( 1) 00:16:57.321 12.705 - 12.800: 98.8836% ( 1) 00:16:57.321 12.800 - 12.895: 98.8912% ( 1) 00:16:57.321 12.990 - 13.084: 98.9064% ( 2) 00:16:57.321 13.084 - 13.179: 98.9215% ( 2) 00:16:57.321 13.274 - 13.369: 98.9291% ( 1) 00:16:57.321 13.369 - 13.464: 98.9443% ( 2) 00:16:57.321 13.559 - 13.653: 98.9595% ( 2) 00:16:57.321 13.653 - 13.748: 98.9747% ( 2) 00:16:57.321 13.938 - 14.033: 98.9975% ( 3) 00:16:57.321 14.033 - 14.127: 99.0127% ( 2) 00:16:57.321 14.507 - 14.601: 99.0203% ( 1) 00:16:57.321 14.601 - 14.696: 99.0279% ( 1) 00:16:57.321 14.696 - 14.791: 99.0431% ( 2) 00:16:57.321 14.791 - 14.886: 99.0507% ( 1) 00:16:57.321 16.213 - 16.308: 99.0583% ( 1) 00:16:57.321 16.403 - 16.498: 99.0658% ( 1) 00:16:57.321 17.161 - 17.256: 99.0734% ( 1) 00:16:57.321 17.256 - 17.351: 99.0810% ( 1) 00:16:57.321 17.351 - 17.446: 99.1266% ( 6) 00:16:57.321 17.446 - 17.541: 99.1646% ( 5) 00:16:57.321 17.541 - 17.636: 99.1950% ( 4) 00:16:57.321 17.636 - 17.730: 99.2329% ( 5) 00:16:57.321 17.730 - 17.825: 99.2633% ( 4) 00:16:57.321 17.825 - 17.920: 99.3317% ( 9) 00:16:57.321 17.920 - 18.015: 99.4228% ( 12) 00:16:57.321 18.015 - 18.110: 99.4760% ( 7) 00:16:57.321 18.110 - 18.204: 99.5291% ( 7) 00:16:57.321 18.204 - 18.299: 99.5823% ( 7) 00:16:57.321 18.299 - 18.394: 99.6658% ( 11) 00:16:57.321 18.394 - 18.489: 99.7038% ( 5) 00:16:57.321 18.489 - 18.584: 99.7646% ( 8) 00:16:57.321 18.584 - 18.679: 99.8025% ( 5) 00:16:57.321 18.868 - 18.963: 99.8177% ( 2) 00:16:57.321 18.963 - 19.058: 99.8405% ( 3) 00:16:57.321 19.058 - 19.153: 99.8481% ( 1) 00:16:57.321 19.247 - 19.342: 99.8557% ( 1) 00:16:57.321 19.816 - 19.911: 99.8633% ( 1) 00:16:57.321 21.523 - 21.618: 99.8709% ( 1) 00:16:57.322 23.230 - 23.324: 99.8785% ( 1) 00:16:57.322 24.652 - 24.841: 99.8861% ( 1) 00:16:57.322 32.237 - 32.427: 99.8937% ( 1) 00:16:57.322 3980.705 - 4004.978: 99.9924% ( 13) 00:16:57.322 4004.978 - 4029.250: 100.0000% ( 1) 00:16:57.322 00:16:57.322 Complete histogram 00:16:57.322 ================== 00:16:57.322 Range in us Cumulative Count 00:16:57.322 2.062 - 2.074: 4.9746% ( 655) 00:16:57.322 2.074 - 2.086: 29.9385% ( 3287) 00:16:57.322 2.086 - 2.098: 32.9384% ( 395) 00:16:57.322 2.098 - 2.110: 45.9634% ( 1715) 00:16:57.322 2.110 - 2.121: 59.1251% ( 1733) 00:16:57.322 2.121 - 2.133: 61.5554% ( 320) 00:16:57.322 2.133 - 2.145: 68.5350% ( 919) 00:16:57.322 2.145 - 2.157: 76.3728% ( 1032) 00:16:57.322 2.157 - 2.169: 77.7246% ( 178) 00:16:57.322 2.169 - 2.181: 83.6029% ( 774) 00:16:57.322 2.181 - 2.193: 87.2332% ( 478) 00:16:57.322 2.193 - 2.204: 88.4180% ( 156) 00:16:57.322 2.204 - 2.216: 89.5268% ( 146) 00:16:57.322 2.216 - 2.228: 91.1293% ( 211) 00:16:57.322 2.228 - 2.240: 92.4584% ( 175) 00:16:57.322 2.240 - 2.252: 93.6280% ( 154) 00:16:57.322 2.252 - 2.264: 94.1824% ( 73) 00:16:57.322 2.264 - 2.276: 94.4406% ( 34) 00:16:57.322 2.276 - 2.287: 94.6533% ( 28) 00:16:57.322 2.287 - 2.299: 94.8584% ( 27) 00:16:57.322 2.299 - 2.311: 95.2153% ( 47) 00:16:57.322 2.311 - 2.323: 95.3824% ( 22) 00:16:57.322 2.323 - 2.335: 95.4583% ( 10) 00:16:57.322 2.335 - 2.347: 95.5115% ( 7) 00:16:57.322 2.347 - 2.359: 95.5571% ( 6) 00:16:57.322 2.359 - 2.370: 95.6178% ( 8) 00:16:57.322 2.370 - 2.382: 95.7318% ( 15) 00:16:57.322 2.382 - 2.394: 95.9520% ( 29) 00:16:57.322 2.394 - 2.406: 96.1950% ( 32) 00:16:57.322 2.406 - 2.418: 96.4077% ( 28) 00:16:57.322 2.418 - 2.430: 96.6735% ( 35) 00:16:57.322 2.430 - 2.441: 96.9013% ( 30) 00:16:57.322 2.441 - 
2.453: 97.0456% ( 19) 00:16:57.322 2.453 - 2.465: 97.1899% ( 19) 00:16:57.322 2.465 - 2.477: 97.3191% ( 17) 00:16:57.322 2.477 - 2.489: 97.4937% ( 23) 00:16:57.322 2.489 - 2.501: 97.6380% ( 19) 00:16:57.322 2.501 - 2.513: 97.7140% ( 10) 00:16:57.322 2.513 - 2.524: 97.7899% ( 10) 00:16:57.322 2.524 - 2.536: 97.8507% ( 8) 00:16:57.322 2.536 - 2.548: 97.9190% ( 9) 00:16:57.322 2.548 - 2.560: 97.9570% ( 5) 00:16:57.322 2.560 - 2.572: 98.0026% ( 6) 00:16:57.322 2.572 - 2.584: 98.0406% ( 5) 00:16:57.322 2.584 - 2.596: 98.0709% ( 4) 00:16:57.322 2.596 - 2.607: 98.0785% ( 1) 00:16:57.322 2.607 - 2.619: 98.0861% ( 1) 00:16:57.322 2.631 - 2.643: 98.1013% ( 2) 00:16:57.322 2.643 - 2.655: 98.1089% ( 1) 00:16:57.322 2.667 - 2.679: 98.1241% ( 2) 00:16:57.322 2.679 - 2.690: 98.1393% ( 2) 00:16:57.322 2.690 - 2.702: 98.1621% ( 3) 00:16:57.322 2.702 - 2.714: 98.1773% ( 2) 00:16:57.322 2.714 - 2.726: 98.1925% ( 2) 00:16:57.322 2.738 - 2.750: 98.2228% ( 4) 00:16:57.322 2.750 - 2.761: 98.2380% ( 2) 00:16:57.322 2.761 - 2.773: 98.2456% ( 1) 00:16:57.322 2.773 - 2.785: 98.2532% ( 1) 00:16:57.322 2.797 - 2.809: 98.2760% ( 3) 00:16:57.322 2.809 - 2.821: 98.2988% ( 3) 00:16:57.322 2.821 - 2.833: 98.3292% ( 4) 00:16:57.322 2.833 - 2.844: 98.3443% ( 2) 00:16:57.322 2.844 - 2.856: 98.3519% ( 1) 00:16:57.322 2.856 - 2.868: 98.3671% ( 2) 00:16:57.322 2.868 - 2.880: 9[2024-10-28 15:12:43.758467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:57.322 8.3975% ( 4) 00:16:57.322 2.880 - 2.892: 98.4051% ( 1) 00:16:57.322 2.892 - 2.904: 98.4127% ( 1) 00:16:57.322 2.939 - 2.951: 98.4279% ( 2) 00:16:57.322 2.987 - 2.999: 98.4355% ( 1) 00:16:57.322 3.010 - 3.022: 98.4431% ( 1) 00:16:57.322 3.022 - 3.034: 98.4507% ( 1) 00:16:57.322 3.058 - 3.081: 98.4659% ( 2) 00:16:57.322 3.081 - 3.105: 98.4735% ( 1) 00:16:57.322 3.105 - 3.129: 98.4811% ( 1) 00:16:57.322 3.129 - 3.153: 98.4886% ( 1) 00:16:57.322 3.153 - 3.176: 98.5038% ( 2) 00:16:57.322 3.176 - 3.200: 98.5114% ( 1) 00:16:57.322 3.200 - 3.224: 98.5342% ( 3) 00:16:57.322 3.247 - 3.271: 98.5494% ( 2) 00:16:57.322 3.271 - 3.295: 98.5646% ( 2) 00:16:57.322 3.413 - 3.437: 98.5722% ( 1) 00:16:57.322 3.437 - 3.461: 98.5798% ( 1) 00:16:57.322 3.508 - 3.532: 98.5950% ( 2) 00:16:57.322 3.556 - 3.579: 98.6026% ( 1) 00:16:57.322 3.579 - 3.603: 98.6178% ( 2) 00:16:57.322 3.603 - 3.627: 98.6254% ( 1) 00:16:57.322 3.627 - 3.650: 98.6481% ( 3) 00:16:57.322 3.698 - 3.721: 98.6709% ( 3) 00:16:57.322 3.721 - 3.745: 98.6861% ( 2) 00:16:57.322 3.745 - 3.769: 98.7013% ( 2) 00:16:57.322 3.769 - 3.793: 98.7089% ( 1) 00:16:57.322 3.816 - 3.840: 98.7165% ( 1) 00:16:57.322 3.840 - 3.864: 98.7241% ( 1) 00:16:57.322 3.887 - 3.911: 98.7317% ( 1) 00:16:57.322 3.959 - 3.982: 98.7393% ( 1) 00:16:57.322 4.006 - 4.030: 98.7469% ( 1) 00:16:57.322 4.101 - 4.124: 98.7545% ( 1) 00:16:57.322 4.172 - 4.196: 98.7621% ( 1) 00:16:57.322 4.219 - 4.243: 98.7697% ( 1) 00:16:57.322 5.191 - 5.215: 98.7772% ( 1) 00:16:57.322 5.570 - 5.594: 98.7848% ( 1) 00:16:57.322 5.831 - 5.855: 98.7924% ( 1) 00:16:57.322 6.210 - 6.258: 98.8000% ( 1) 00:16:57.322 6.637 - 6.684: 98.8152% ( 2) 00:16:57.322 6.874 - 6.921: 98.8228% ( 1) 00:16:57.322 7.159 - 7.206: 98.8304% ( 1) 00:16:57.322 7.348 - 7.396: 98.8380% ( 1) 00:16:57.322 7.490 - 7.538: 98.8456% ( 1) 00:16:57.322 7.775 - 7.822: 98.8532% ( 1) 00:16:57.322 8.107 - 8.154: 98.8608% ( 1) 00:16:57.322 8.249 - 8.296: 98.8684% ( 1) 00:16:57.322 9.481 - 9.529: 98.8760% ( 1) 00:16:57.322 15.550 - 15.644: 98.8836% ( 1) 00:16:57.322 15.644 
- 15.739: 98.8912% ( 1) 00:16:57.322 15.739 - 15.834: 98.9140% ( 3) 00:16:57.322 15.834 - 15.929: 98.9291% ( 2) 00:16:57.322 15.929 - 16.024: 98.9671% ( 5) 00:16:57.322 16.024 - 16.119: 99.0279% ( 8) 00:16:57.322 16.119 - 16.213: 99.0583% ( 4) 00:16:57.322 16.213 - 16.308: 99.0734% ( 2) 00:16:57.322 16.308 - 16.403: 99.1038% ( 4) 00:16:57.322 16.403 - 16.498: 99.1494% ( 6) 00:16:57.322 16.498 - 16.593: 99.2026% ( 7) 00:16:57.322 16.593 - 16.687: 99.2177% ( 2) 00:16:57.322 16.687 - 16.782: 99.2405% ( 3) 00:16:57.322 16.782 - 16.877: 99.2709% ( 4) 00:16:57.322 16.877 - 16.972: 99.2937% ( 3) 00:16:57.322 16.972 - 17.067: 99.3089% ( 2) 00:16:57.322 17.067 - 17.161: 99.3165% ( 1) 00:16:57.322 17.256 - 17.351: 99.3241% ( 1) 00:16:57.322 17.351 - 17.446: 99.3317% ( 1) 00:16:57.322 17.446 - 17.541: 99.3393% ( 1) 00:16:57.322 17.636 - 17.730: 99.3469% ( 1) 00:16:57.322 17.825 - 17.920: 99.3620% ( 2) 00:16:57.322 17.920 - 18.015: 99.3696% ( 1) 00:16:57.322 18.204 - 18.299: 99.3772% ( 1) 00:16:57.322 18.394 - 18.489: 99.3848% ( 1) 00:16:57.322 2014.625 - 2026.761: 99.3924% ( 1) 00:16:57.322 2026.761 - 2038.898: 99.4000% ( 1) 00:16:57.322 2038.898 - 2051.034: 99.4076% ( 1) 00:16:57.322 2148.124 - 2160.261: 99.4152% ( 1) 00:16:57.322 3980.705 - 4004.978: 99.8709% ( 60) 00:16:57.322 4004.978 - 4029.250: 99.9696% ( 13) 00:16:57.322 5995.330 - 6019.603: 100.0000% ( 4) 00:16:57.322 00:16:57.322 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:57.322 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:57.322 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:57.322 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:57.322 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:57.322 [ 00:16:57.322 { 00:16:57.322 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:57.322 "subtype": "Discovery", 00:16:57.322 "listen_addresses": [], 00:16:57.322 "allow_any_host": true, 00:16:57.322 "hosts": [] 00:16:57.322 }, 00:16:57.322 { 00:16:57.322 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:57.322 "subtype": "NVMe", 00:16:57.322 "listen_addresses": [ 00:16:57.322 { 00:16:57.322 "trtype": "VFIOUSER", 00:16:57.322 "adrfam": "IPv4", 00:16:57.322 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:57.322 "trsvcid": "0" 00:16:57.322 } 00:16:57.322 ], 00:16:57.322 "allow_any_host": true, 00:16:57.322 "hosts": [], 00:16:57.322 "serial_number": "SPDK1", 00:16:57.322 "model_number": "SPDK bdev Controller", 00:16:57.322 "max_namespaces": 32, 00:16:57.322 "min_cntlid": 1, 00:16:57.322 "max_cntlid": 65519, 00:16:57.322 "namespaces": [ 00:16:57.322 { 00:16:57.322 "nsid": 1, 00:16:57.322 "bdev_name": "Malloc1", 00:16:57.322 "name": "Malloc1", 00:16:57.322 "nguid": "421D06F72AAD498F978C549374B5B53B", 00:16:57.322 "uuid": "421d06f7-2aad-498f-978c-549374b5b53b" 00:16:57.322 }, 00:16:57.322 { 00:16:57.322 "nsid": 2, 00:16:57.322 "bdev_name": "Malloc3", 00:16:57.322 "name": "Malloc3", 00:16:57.322 "nguid": "2CA2B7745D39428FAD9A6F4D7ADCC0E4", 00:16:57.323 "uuid": "2ca2b774-5d39-428f-ad9a-6f4d7adcc0e4" 00:16:57.323 } 00:16:57.323 ] 00:16:57.323 }, 
00:16:57.323 { 00:16:57.323 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:57.323 "subtype": "NVMe", 00:16:57.323 "listen_addresses": [ 00:16:57.323 { 00:16:57.323 "trtype": "VFIOUSER", 00:16:57.323 "adrfam": "IPv4", 00:16:57.323 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:57.323 "trsvcid": "0" 00:16:57.323 } 00:16:57.323 ], 00:16:57.323 "allow_any_host": true, 00:16:57.323 "hosts": [], 00:16:57.323 "serial_number": "SPDK2", 00:16:57.323 "model_number": "SPDK bdev Controller", 00:16:57.323 "max_namespaces": 32, 00:16:57.323 "min_cntlid": 1, 00:16:57.323 "max_cntlid": 65519, 00:16:57.323 "namespaces": [ 00:16:57.323 { 00:16:57.323 "nsid": 1, 00:16:57.323 "bdev_name": "Malloc2", 00:16:57.323 "name": "Malloc2", 00:16:57.323 "nguid": "477C78A0D3F74CA48ED84B20FA876AC2", 00:16:57.323 "uuid": "477c78a0-d3f7-4ca4-8ed8-4b20fa876ac2" 00:16:57.323 } 00:16:57.323 ] 00:16:57.323 } 00:16:57.323 ] 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3158129 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:57.323 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:57.581 [2024-10-28 15:12:44.318519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:57.840 Malloc4 00:16:57.840 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:58.098 [2024-10-28 15:12:44.833287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:58.098 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:58.098 Asynchronous Event Request test 00:16:58.098 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:58.098 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:58.098 Registering asynchronous event callbacks... 00:16:58.098 Starting namespace attribute notice tests for all controllers... 
00:16:58.098 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:58.098 aer_cb - Changed Namespace 00:16:58.098 Cleaning up... 00:16:58.383 [ 00:16:58.383 { 00:16:58.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:58.383 "subtype": "Discovery", 00:16:58.383 "listen_addresses": [], 00:16:58.383 "allow_any_host": true, 00:16:58.383 "hosts": [] 00:16:58.383 }, 00:16:58.383 { 00:16:58.383 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:58.383 "subtype": "NVMe", 00:16:58.383 "listen_addresses": [ 00:16:58.383 { 00:16:58.383 "trtype": "VFIOUSER", 00:16:58.383 "adrfam": "IPv4", 00:16:58.383 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:58.383 "trsvcid": "0" 00:16:58.383 } 00:16:58.383 ], 00:16:58.383 "allow_any_host": true, 00:16:58.383 "hosts": [], 00:16:58.383 "serial_number": "SPDK1", 00:16:58.383 "model_number": "SPDK bdev Controller", 00:16:58.383 "max_namespaces": 32, 00:16:58.383 "min_cntlid": 1, 00:16:58.383 "max_cntlid": 65519, 00:16:58.383 "namespaces": [ 00:16:58.383 { 00:16:58.383 "nsid": 1, 00:16:58.383 "bdev_name": "Malloc1", 00:16:58.383 "name": "Malloc1", 00:16:58.383 "nguid": "421D06F72AAD498F978C549374B5B53B", 00:16:58.383 "uuid": "421d06f7-2aad-498f-978c-549374b5b53b" 00:16:58.383 }, 00:16:58.383 { 00:16:58.383 "nsid": 2, 00:16:58.383 "bdev_name": "Malloc3", 00:16:58.383 "name": "Malloc3", 00:16:58.383 "nguid": "2CA2B7745D39428FAD9A6F4D7ADCC0E4", 00:16:58.383 "uuid": "2ca2b774-5d39-428f-ad9a-6f4d7adcc0e4" 00:16:58.383 } 00:16:58.383 ] 00:16:58.383 }, 00:16:58.383 { 00:16:58.383 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:58.383 "subtype": "NVMe", 00:16:58.383 "listen_addresses": [ 00:16:58.383 { 00:16:58.383 "trtype": "VFIOUSER", 00:16:58.383 "adrfam": "IPv4", 00:16:58.383 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:58.383 "trsvcid": "0" 00:16:58.383 } 00:16:58.383 ], 00:16:58.383 "allow_any_host": true, 00:16:58.383 "hosts": [], 00:16:58.383 "serial_number": "SPDK2", 00:16:58.383 "model_number": "SPDK bdev Controller", 00:16:58.383 "max_namespaces": 32, 00:16:58.383 "min_cntlid": 1, 00:16:58.383 "max_cntlid": 65519, 00:16:58.383 "namespaces": [ 00:16:58.383 { 00:16:58.383 "nsid": 1, 00:16:58.383 "bdev_name": "Malloc2", 00:16:58.383 "name": "Malloc2", 00:16:58.383 "nguid": "477C78A0D3F74CA48ED84B20FA876AC2", 00:16:58.383 "uuid": "477c78a0-d3f7-4ca4-8ed8-4b20fa876ac2" 00:16:58.383 }, 00:16:58.383 { 00:16:58.383 "nsid": 2, 00:16:58.383 "bdev_name": "Malloc4", 00:16:58.383 "name": "Malloc4", 00:16:58.383 "nguid": "5944712CF2C84A17A5B612598AEF7532", 00:16:58.383 "uuid": "5944712c-f2c8-4a17-a5b6-12598aef7532" 00:16:58.383 } 00:16:58.383 ] 00:16:58.383 } 00:16:58.383 ] 00:16:58.383 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3158129 00:16:58.383 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:58.383 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3151811 00:16:58.383 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3151811 ']' 00:16:58.383 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3151811 00:16:58.383 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:58.383 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.383 
15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3151811 00:16:58.667 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.667 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.667 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3151811' 00:16:58.667 killing process with pid 3151811 00:16:58.667 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3151811 00:16:58.667 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3151811 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3158283 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3158283' 00:16:58.928 Process pid: 3158283 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3158283 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3158283 ']' 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.928 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:58.928 [2024-10-28 15:12:45.779080] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:58.928 [2024-10-28 15:12:45.781721] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:16:58.928 [2024-10-28 15:12:45.781859] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.189 [2024-10-28 15:12:45.935432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.189 [2024-10-28 15:12:46.054982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.189 [2024-10-28 15:12:46.055084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.189 [2024-10-28 15:12:46.055122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.189 [2024-10-28 15:12:46.055152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.189 [2024-10-28 15:12:46.055193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.449 [2024-10-28 15:12:46.058122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.449 [2024-10-28 15:12:46.058218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.449 [2024-10-28 15:12:46.058312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.449 [2024-10-28 15:12:46.058316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.449 [2024-10-28 15:12:46.221981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:59.449 [2024-10-28 15:12:46.222418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:59.449 [2024-10-28 15:12:46.222733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:59.449 [2024-10-28 15:12:46.223725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:59.449 [2024-10-28 15:12:46.224092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
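The interrupt-mode target that has just started is populated by the RPC calls logged below; condensed into a sketch for reference (the NQNs, bdev sizes and vfio-user paths are the ones used in this run, the rpc.py path is shortened here, and the run additionally passes '-M -I' to the VFIOUSER transport):

  # sketch of the vfio-user bring-up exercised in this run
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                # this run also adds '-M -I' (interrupt mode)
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1             # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same three steps (malloc bdev, subsystem, VFIOUSER listener) are then repeated for Malloc2 / nqn.2019-07.io.spdk:cnode2 under /var/run/vfio-user/domain/vfio-user2/2, as the log below shows.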
00:17:00.391 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.391 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:00.391 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:01.331 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:01.900 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:01.900 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:01.900 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:01.900 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:01.900 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:02.161 Malloc1 00:17:02.161 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:03.096 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:03.354 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:03.612 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:03.612 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:03.612 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:03.870 Malloc2 00:17:03.870 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:04.436 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:04.694 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3158283 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 3158283 ']' 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3158283 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3158283 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3158283' 00:17:04.953 killing process with pid 3158283 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3158283 00:17:04.953 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3158283 00:17:05.212 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:05.212 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:05.212 00:17:05.212 real 0m58.348s 00:17:05.212 user 3m40.107s 00:17:05.212 sys 0m5.079s 00:17:05.212 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.212 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:05.212 ************************************ 00:17:05.212 END TEST nvmf_vfio_user 00:17:05.212 ************************************ 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.473 ************************************ 00:17:05.473 START TEST nvmf_vfio_user_nvme_compliance 00:17:05.473 ************************************ 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:05.473 * Looking for test storage... 
00:17:05.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # lcov --version 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.473 --rc genhtml_branch_coverage=1 00:17:05.473 --rc genhtml_function_coverage=1 00:17:05.473 --rc genhtml_legend=1 00:17:05.473 --rc geninfo_all_blocks=1 00:17:05.473 --rc geninfo_unexecuted_blocks=1 00:17:05.473 00:17:05.473 ' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.473 --rc genhtml_branch_coverage=1 00:17:05.473 --rc genhtml_function_coverage=1 00:17:05.473 --rc genhtml_legend=1 00:17:05.473 --rc geninfo_all_blocks=1 00:17:05.473 --rc geninfo_unexecuted_blocks=1 00:17:05.473 00:17:05.473 ' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.473 --rc genhtml_branch_coverage=1 00:17:05.473 --rc genhtml_function_coverage=1 00:17:05.473 --rc genhtml_legend=1 00:17:05.473 --rc geninfo_all_blocks=1 00:17:05.473 --rc geninfo_unexecuted_blocks=1 00:17:05.473 00:17:05.473 ' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:05.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.473 --rc genhtml_branch_coverage=1 00:17:05.473 --rc genhtml_function_coverage=1 00:17:05.473 --rc genhtml_legend=1 00:17:05.473 --rc geninfo_all_blocks=1 00:17:05.473 --rc 
geninfo_unexecuted_blocks=1 00:17:05.473 00:17:05.473 ' 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.473 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.474 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3159144 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3159144' 00:17:05.735 Process pid: 3159144 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3159144 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3159144 ']' 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.735 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:05.735 [2024-10-28 15:12:52.408279] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:17:05.735 [2024-10-28 15:12:52.408390] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.735 [2024-10-28 15:12:52.541388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:05.995 [2024-10-28 15:12:52.661076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.995 [2024-10-28 15:12:52.661174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.996 [2024-10-28 15:12:52.661211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.996 [2024-10-28 15:12:52.661241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.996 [2024-10-28 15:12:52.661267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.996 [2024-10-28 15:12:52.663907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.996 [2024-10-28 15:12:52.664031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.996 [2024-10-28 15:12:52.664041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.931 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.931 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:06.931 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:08.309 malloc0 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:08.309 15:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.309 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:08.309 00:17:08.309 00:17:08.309 CUnit - A unit testing framework for C - Version 2.1-3 00:17:08.309 http://cunit.sourceforge.net/ 00:17:08.309 00:17:08.309 00:17:08.309 Suite: nvme_compliance 00:17:08.309 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-28 15:12:55.137732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:08.309 [2024-10-28 15:12:55.139671] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:08.309 [2024-10-28 15:12:55.139723] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:08.309 [2024-10-28 15:12:55.139745] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:08.309 [2024-10-28 15:12:55.141811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:08.567 passed 00:17:08.567 Test: admin_identify_ctrlr_verify_fused ...[2024-10-28 15:12:55.277233] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:08.567 [2024-10-28 15:12:55.280269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:08.567 passed 00:17:08.567 Test: admin_identify_ns ...[2024-10-28 15:12:55.421373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:08.826 [2024-10-28 15:12:55.481708] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:08.826 [2024-10-28 15:12:55.489697] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:08.826 [2024-10-28 15:12:55.510845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:17:08.826 passed 00:17:08.826 Test: admin_get_features_mandatory_features ...[2024-10-28 15:12:55.640918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:08.826 [2024-10-28 15:12:55.645954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:09.084 passed 00:17:09.084 Test: admin_get_features_optional_features ...[2024-10-28 15:12:55.778511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:09.084 [2024-10-28 15:12:55.781569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:09.084 passed 00:17:09.084 Test: admin_set_features_number_of_queues ...[2024-10-28 15:12:55.917530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:09.342 [2024-10-28 15:12:56.020030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:09.342 passed 00:17:09.342 Test: admin_get_log_page_mandatory_logs ...[2024-10-28 15:12:56.153862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:09.342 [2024-10-28 15:12:56.156888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:09.600 passed 00:17:09.600 Test: admin_get_log_page_with_lpo ...[2024-10-28 15:12:56.287418] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:09.600 [2024-10-28 15:12:56.356715] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:09.600 [2024-10-28 15:12:56.369795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:09.600 passed 00:17:09.858 Test: fabric_property_get ...[2024-10-28 15:12:56.502924] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:09.858 [2024-10-28 15:12:56.504724] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:09.858 [2024-10-28 15:12:56.506022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:09.858 passed 00:17:09.858 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-28 15:12:56.640246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:09.858 [2024-10-28 15:12:56.641841] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:09.858 [2024-10-28 15:12:56.646297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:09.858 passed 00:17:10.116 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-28 15:12:56.775124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.116 [2024-10-28 15:12:56.862711] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:10.116 [2024-10-28 15:12:56.878712] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:10.116 [2024-10-28 15:12:56.883830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.116 passed 00:17:10.375 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-28 15:12:57.013518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.375 [2024-10-28 15:12:57.015021] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:10.375 [2024-10-28 15:12:57.019580] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.375 passed 00:17:10.375 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-28 15:12:57.150468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.375 [2024-10-28 15:12:57.227724] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:10.633 [2024-10-28 15:12:57.251704] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:10.633 [2024-10-28 15:12:57.256846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.633 passed 00:17:10.633 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-28 15:12:57.393470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.633 [2024-10-28 15:12:57.395037] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:10.633 [2024-10-28 15:12:57.395134] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:10.633 [2024-10-28 15:12:57.396520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.633 passed 00:17:10.891 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-28 15:12:57.527728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.891 [2024-10-28 15:12:57.621698] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:10.891 [2024-10-28 15:12:57.629674] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:10.891 [2024-10-28 15:12:57.637692] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:10.891 [2024-10-28 15:12:57.645701] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:10.891 [2024-10-28 15:12:57.674895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.891 passed 00:17:11.149 Test: admin_create_io_sq_verify_pc ...[2024-10-28 15:12:57.804176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.149 [2024-10-28 15:12:57.822718] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:11.149 [2024-10-28 15:12:57.839891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.149 passed 00:17:11.149 Test: admin_create_io_qp_max_qps ...[2024-10-28 15:12:57.971248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.524 [2024-10-28 15:12:59.071706] nvme_ctrlr.c:5487:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:12.782 [2024-10-28 15:12:59.457544] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.782 passed 00:17:12.782 Test: admin_create_io_sq_shared_cq ...[2024-10-28 15:12:59.592591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.040 [2024-10-28 15:12:59.725674] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:13.040 [2024-10-28 15:12:59.762781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.040 passed 00:17:13.040 00:17:13.040 Run Summary: Type Total Ran Passed Failed Inactive 00:17:13.040 suites 1 1 n/a 0 0 00:17:13.040 tests 18 18 18 0 0 00:17:13.040 asserts 
360 360 360 0 n/a 00:17:13.040 00:17:13.040 Elapsed time = 2.013 seconds 00:17:13.040 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3159144 00:17:13.040 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3159144 ']' 00:17:13.040 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3159144 00:17:13.040 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:13.040 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.040 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3159144 00:17:13.298 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:13.298 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.298 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3159144' 00:17:13.298 killing process with pid 3159144 00:17:13.298 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3159144 00:17:13.298 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3159144 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:13.560 00:17:13.560 real 0m8.104s 00:17:13.560 user 0m22.952s 00:17:13.560 sys 0m0.842s 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:13.560 ************************************ 00:17:13.560 END TEST nvmf_vfio_user_nvme_compliance 00:17:13.560 ************************************ 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.560 ************************************ 00:17:13.560 START TEST nvmf_vfio_user_fuzz 00:17:13.560 ************************************ 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:13.560 * Looking for test storage... 
00:17:13.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # lcov --version 00:17:13.560 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:13.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.820 --rc genhtml_branch_coverage=1 00:17:13.820 --rc genhtml_function_coverage=1 00:17:13.820 --rc genhtml_legend=1 00:17:13.820 --rc geninfo_all_blocks=1 00:17:13.820 --rc geninfo_unexecuted_blocks=1 00:17:13.820 00:17:13.820 ' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:13.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.820 --rc genhtml_branch_coverage=1 00:17:13.820 --rc genhtml_function_coverage=1 00:17:13.820 --rc genhtml_legend=1 00:17:13.820 --rc geninfo_all_blocks=1 00:17:13.820 --rc geninfo_unexecuted_blocks=1 00:17:13.820 00:17:13.820 ' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:13.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.820 --rc genhtml_branch_coverage=1 00:17:13.820 --rc genhtml_function_coverage=1 00:17:13.820 --rc genhtml_legend=1 00:17:13.820 --rc geninfo_all_blocks=1 00:17:13.820 --rc geninfo_unexecuted_blocks=1 00:17:13.820 00:17:13.820 ' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:13.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.820 --rc genhtml_branch_coverage=1 00:17:13.820 --rc genhtml_function_coverage=1 00:17:13.820 --rc genhtml_legend=1 00:17:13.820 --rc geninfo_all_blocks=1 00:17:13.820 --rc geninfo_unexecuted_blocks=1 00:17:13.820 00:17:13.820 ' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.820 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:13.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3160133 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3160133' 00:17:13.821 Process pid: 3160133 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3160133 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3160133 ']' 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
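At this point vfio_user_fuzz.sh has started a single-core nvmf target and is waiting for its RPC socket. Condensed into a sketch from the trace above (the backgrounding with & and $! is shorthand for how the pid ends up in nvmfpid, waitforlisten and killprocess are helpers from the suite's autotest_common.sh, and pid 3160133 is specific to this run):

    # fuzz-target bring-up, as traced above
    rm -rf /var/run/vfio-user
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                                   # 3160133 in this run
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $nvmfpid                       # waits for /var/tmp/spdk.sock to come up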
00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.821 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:14.387 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.387 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:14.387 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.328 malloc0 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
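The rpc_cmd calls traced just above assemble the vfio-user controller that the fuzzer is about to attack. rpc_cmd is the suite's wrapper around SPDK's RPC client, so the same state could be reproduced by hand with scripts/rpc.py (relative path used here for brevity; the log uses the full workspace path), roughly:

    # VFIOUSER transport plus a subsystem backed by a 64 MB / 512-byte-block malloc bdev
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows in the log (-m 0x2 -t 30 -S 123456 -N -a) then connects to exactly this controller at traddr:/var/run/vfio-user.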
00:17:15.328 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:47.427 Fuzzing completed. Shutting down the fuzz application 00:17:47.427 00:17:47.427 Dumping successful admin opcodes: 00:17:47.427 8, 9, 10, 24, 00:17:47.427 Dumping successful io opcodes: 00:17:47.427 0, 00:17:47.427 NS: 0x20000081ef00 I/O qp, Total commands completed: 273388, total successful commands: 1077, random_seed: 1733944640 00:17:47.427 NS: 0x20000081ef00 admin qp, Total commands completed: 40958, total successful commands: 333, random_seed: 2390741120 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3160133 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3160133 ']' 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3160133 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160133 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160133' 00:17:47.427 killing process with pid 3160133 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3160133 00:17:47.427 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3160133 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:47.427 00:17:47.427 real 0m33.043s 00:17:47.427 user 0m32.761s 00:17:47.427 sys 0m29.103s 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:47.427 
************************************ 00:17:47.427 END TEST nvmf_vfio_user_fuzz 00:17:47.427 ************************************ 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:47.427 ************************************ 00:17:47.427 START TEST nvmf_auth_target 00:17:47.427 ************************************ 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:47.427 * Looking for test storage... 00:17:47.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lcov --version 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.427 --rc genhtml_branch_coverage=1 00:17:47.427 --rc genhtml_function_coverage=1 00:17:47.427 --rc genhtml_legend=1 00:17:47.427 --rc geninfo_all_blocks=1 00:17:47.427 --rc geninfo_unexecuted_blocks=1 00:17:47.427 00:17:47.427 ' 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.427 --rc genhtml_branch_coverage=1 00:17:47.427 --rc genhtml_function_coverage=1 00:17:47.427 --rc genhtml_legend=1 00:17:47.427 --rc geninfo_all_blocks=1 00:17:47.427 --rc geninfo_unexecuted_blocks=1 00:17:47.427 00:17:47.427 ' 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.427 --rc genhtml_branch_coverage=1 00:17:47.427 --rc genhtml_function_coverage=1 00:17:47.427 --rc genhtml_legend=1 00:17:47.427 --rc geninfo_all_blocks=1 00:17:47.427 --rc geninfo_unexecuted_blocks=1 00:17:47.427 00:17:47.427 ' 00:17:47.427 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.427 --rc genhtml_branch_coverage=1 00:17:47.427 --rc genhtml_function_coverage=1 00:17:47.428 --rc genhtml_legend=1 00:17:47.428 --rc geninfo_all_blocks=1 00:17:47.428 --rc geninfo_unexecuted_blocks=1 00:17:47.428 00:17:47.428 ' 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.428 15:13:33 
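The lcov probe above walks the reported version (1.15) through the lt/cmp_versions helpers in scripts/common.sh: both strings are split on '.', '-' and ':' and compared component by component, and because 1 < 2 the legacy lcov branch/function coverage options get exported. A minimal stand-alone sketch of that comparison (simplified, not the exact helper):

version_lt() {                    # returns 0 if $1 sorts before $2
  local IFS=.-:
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    x=${a[i]:-0}; y=${b[i]:-0}    # missing components compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1
}
version_lt 1.15 2 && echo 'lcov older than 2: use the legacy LCOV_OPTS shown in the trace'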
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:47.428 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:50.002 
15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:50.002 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.002 15:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:50.002 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:50.002 Found net devices under 0000:84:00.0: cvl_0_0 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:50.002 Found net devices under 0000:84:00.1: cvl_0_1 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
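Both E810 ports (0000:84:00.0 and 0000:84:00.1, device id 0x159b) are mapped to their kernel net devices purely through sysfs, which is what produces the "Found net devices under ..." lines above. The lookup reduces to something like the following (illustrative only; the real helper in nvmf/common.sh also branches on RDMA vs TCP and on link state):

for pci in 0000:84:00.0 0000:84:00.1; do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue          # port has no bound net driver
    dev=${netdir##*/}
    echo "Found net devices under $pci: $dev ($(cat "$netdir"/operstate 2>/dev/null))"
  done
done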
net_devs+=("${pci_net_devs[@]}") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.002 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:50.003 15:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:50.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:50.003 00:17:50.003 --- 10.0.0.2 ping statistics --- 00:17:50.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.003 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:17:50.003 00:17:50.003 --- 10.0.0.1 ping statistics --- 00:17:50.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.003 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3165587 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3165587 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3165587 ']' 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
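nvmf_tcp_init isolates the target-side port in its own network namespace so the two ports behave like separate hosts on 10.0.0.0/24, then verifies the path with one ping in each direction before nvmf_tgt is started inside that namespace. Condensed from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator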
00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.003 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3165707 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9db11c7ca157bc2ff4b5ad8bd1207e9d5486b3dabb0c53e6 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NGG 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9db11c7ca157bc2ff4b5ad8bd1207e9d5486b3dabb0c53e6 0 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9db11c7ca157bc2ff4b5ad8bd1207e9d5486b3dabb0c53e6 0 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9db11c7ca157bc2ff4b5ad8bd1207e9d5486b3dabb0c53e6 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NGG 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NGG 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.NGG 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1313b9d5e160c9456c71d89ca6c96b7a89b5f162f25ef311fe7272978cffbc7d 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nNK 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1313b9d5e160c9456c71d89ca6c96b7a89b5f162f25ef311fe7272978cffbc7d 3 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1313b9d5e160c9456c71d89ca6c96b7a89b5f162f25ef311fe7272978cffbc7d 3 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1313b9d5e160c9456c71d89ca6c96b7a89b5f162f25ef311fe7272978cffbc7d 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nNK 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nNK 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.nNK 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:50.602 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4990c55195b55daf60101397c54577b0 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QHV 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4990c55195b55daf60101397c54577b0 1 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4990c55195b55daf60101397c54577b0 1 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4990c55195b55daf60101397c54577b0 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:50.603 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QHV 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QHV 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.QHV 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b91209042462d3ad3f98e95ba7ebd0c7693511556b224b0e 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.HuT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b91209042462d3ad3f98e95ba7ebd0c7693511556b224b0e 2 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b91209042462d3ad3f98e95ba7ebd0c7693511556b224b0e 2 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.863 15:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b91209042462d3ad3f98e95ba7ebd0c7693511556b224b0e 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.HuT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.HuT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.HuT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a1a2857b5556bf22ac69845e1b36ee093149110548128c75 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lJT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a1a2857b5556bf22ac69845e1b36ee093149110548128c75 2 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a1a2857b5556bf22ac69845e1b36ee093149110548128c75 2 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a1a2857b5556bf22ac69845e1b36ee093149110548128c75 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lJT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lJT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.lJT 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cfd12fdac5d3e389964a6d2d3c186a62 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ONt 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cfd12fdac5d3e389964a6d2d3c186a62 1 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cfd12fdac5d3e389964a6d2d3c186a62 1 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cfd12fdac5d3e389964a6d2d3c186a62 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:50.863 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ONt 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ONt 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ONt 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e2c0f84d605b0e5b9415776d639e4aa89a4e70220024494911b22c77a8cd1c04 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aV8 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key e2c0f84d605b0e5b9415776d639e4aa89a4e70220024494911b22c77a8cd1c04 3 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e2c0f84d605b0e5b9415776d639e4aa89a4e70220024494911b22c77a8cd1c04 3 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e2c0f84d605b0e5b9415776d639e4aa89a4e70220024494911b22c77a8cd1c04 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:50.864 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aV8 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aV8 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.aV8 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3165587 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3165587 ']' 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.123 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3165707 /var/tmp/host.sock 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3165707 ']' 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:51.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
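All six secrets generated above follow the same recipe: gen_dhchap_key pulls the requested number of random bytes via xxd, the format_dhchap_key/format_key python step wraps the hex into a DHHC-1 string, and the result is written mode 0600 under /tmp/spdk.key-*. A hedged sketch of that wrapping, assuming the usual TP-8006 secret representation (base64 of the key bytes with a little-endian CRC-32 appended, hash id 00-03 for null/sha256/sha384/sha512); the exact encoding lives in nvmf/common.sh and may differ in detail:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as in gen_dhchap_key null 48
hash_id=00                                   # 00=null, 01=sha256, 02=sha384, 03=sha512
b64=$(python3 - "$key_hex" <<'PY'
import base64, binascii, sys, zlib
raw = binascii.unhexlify(sys.argv[1].strip())
crc = zlib.crc32(raw).to_bytes(4, "little")  # assumption: CRC-32 of the key, little-endian
print(base64.b64encode(raw + crc).decode())
PY
)
printf 'DHHC-1:%s:%s:\n' "$hash_id" "$b64"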
00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.691 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NGG 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.NGG 00:17:51.950 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NGG 00:17:52.519 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.nNK ]] 00:17:52.519 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nNK 00:17:52.519 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.519 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.519 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.519 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nNK 00:17:52.519 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nNK 00:17:53.086 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:53.086 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QHV 00:17:53.086 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.086 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.086 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.086 15:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QHV 00:17:53.086 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QHV 00:17:53.655 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.HuT ]] 00:17:53.655 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HuT 00:17:53.655 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.655 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.655 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.655 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HuT 00:17:53.655 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HuT 00:17:54.225 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:54.225 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lJT 00:17:54.225 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.225 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.225 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.225 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lJT 00:17:54.225 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lJT 00:17:54.795 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ONt ]] 00:17:54.795 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONt 00:17:54.795 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.795 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.795 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.795 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONt 00:17:54.795 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONt 00:17:55.053 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:55.053 15:13:41 
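Each generated secret is registered twice: once in the nvmf target's keyring over the default RPC socket (rpc_cmd) and once in the host-side spdk_tgt over /var/tmp/host.sock (hostrpc), so both ends of the DH-HMAC-CHAP exchange can refer to the same key names. Spelled out for the key0/ckey0 pair seen above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.NGG                            # target side
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NGG      # host side
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nNK                         # controller key, target
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nNK   # controller key, host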
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aV8 00:17:55.053 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.053 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.053 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.053 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.aV8 00:17:55.054 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.aV8 00:17:55.623 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:55.623 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:55.623 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.623 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.623 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:55.623 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.884 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.884 
15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.144 00:17:56.406 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.406 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.406 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.975 { 00:17:56.975 "cntlid": 1, 00:17:56.975 "qid": 0, 00:17:56.975 "state": "enabled", 00:17:56.975 "thread": "nvmf_tgt_poll_group_000", 00:17:56.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:56.975 "listen_address": { 00:17:56.975 "trtype": "TCP", 00:17:56.975 "adrfam": "IPv4", 00:17:56.975 "traddr": "10.0.0.2", 00:17:56.975 "trsvcid": "4420" 00:17:56.975 }, 00:17:56.975 "peer_address": { 00:17:56.975 "trtype": "TCP", 00:17:56.975 "adrfam": "IPv4", 00:17:56.975 "traddr": "10.0.0.1", 00:17:56.975 "trsvcid": "47798" 00:17:56.975 }, 00:17:56.975 "auth": { 00:17:56.975 "state": "completed", 00:17:56.975 "digest": "sha256", 00:17:56.975 "dhgroup": "null" 00:17:56.975 } 00:17:56.975 } 00:17:56.975 ]' 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.975 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.233 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.233 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.233 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.233 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.234 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.802 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:17:57.803 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.709 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.279 15:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.279 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.538 00:18:00.538 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.538 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.538 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.106 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.106 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.106 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.106 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.106 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.106 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.106 { 00:18:01.106 "cntlid": 3, 00:18:01.106 "qid": 0, 00:18:01.106 "state": "enabled", 00:18:01.106 "thread": "nvmf_tgt_poll_group_000", 00:18:01.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:01.106 "listen_address": { 00:18:01.106 "trtype": "TCP", 00:18:01.106 "adrfam": "IPv4", 00:18:01.106 "traddr": "10.0.0.2", 00:18:01.106 "trsvcid": "4420" 00:18:01.106 }, 00:18:01.106 "peer_address": { 00:18:01.106 "trtype": "TCP", 00:18:01.106 "adrfam": "IPv4", 00:18:01.106 "traddr": "10.0.0.1", 00:18:01.106 "trsvcid": "50710" 00:18:01.106 }, 00:18:01.106 "auth": { 00:18:01.106 "state": "completed", 00:18:01.106 "digest": "sha256", 00:18:01.106 "dhgroup": "null" 00:18:01.106 } 00:18:01.106 } 00:18:01.106 ]' 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.107 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.366 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:01.366 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.275 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.534 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.535 15:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.535 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.102 00:18:04.102 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.102 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.102 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.360 { 00:18:04.360 "cntlid": 5, 00:18:04.360 "qid": 0, 00:18:04.360 "state": "enabled", 00:18:04.360 "thread": "nvmf_tgt_poll_group_000", 00:18:04.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:04.360 "listen_address": { 00:18:04.360 "trtype": "TCP", 00:18:04.360 "adrfam": "IPv4", 00:18:04.360 "traddr": "10.0.0.2", 00:18:04.360 "trsvcid": "4420" 00:18:04.360 }, 00:18:04.360 "peer_address": { 00:18:04.360 "trtype": "TCP", 00:18:04.360 "adrfam": "IPv4", 00:18:04.360 "traddr": "10.0.0.1", 00:18:04.360 "trsvcid": "50730" 00:18:04.360 }, 00:18:04.360 "auth": { 00:18:04.360 "state": "completed", 00:18:04.360 "digest": "sha256", 00:18:04.360 "dhgroup": "null" 00:18:04.360 } 00:18:04.360 } 00:18:04.360 ]' 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:04.360 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.620 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.620 15:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.620 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.881 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:04.881 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.790 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.360 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.928 00:18:07.928 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.928 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.928 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.188 { 00:18:08.188 "cntlid": 7, 00:18:08.188 "qid": 0, 00:18:08.188 "state": "enabled", 00:18:08.188 "thread": "nvmf_tgt_poll_group_000", 00:18:08.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:08.188 "listen_address": { 00:18:08.188 "trtype": "TCP", 00:18:08.188 "adrfam": "IPv4", 00:18:08.188 "traddr": "10.0.0.2", 00:18:08.188 "trsvcid": "4420" 00:18:08.188 }, 00:18:08.188 "peer_address": { 00:18:08.188 "trtype": "TCP", 00:18:08.188 "adrfam": "IPv4", 00:18:08.188 "traddr": "10.0.0.1", 00:18:08.188 "trsvcid": "36348" 00:18:08.188 }, 00:18:08.188 "auth": { 00:18:08.188 "state": "completed", 00:18:08.188 "digest": "sha256", 00:18:08.188 "dhgroup": "null" 00:18:08.188 } 00:18:08.188 } 00:18:08.188 ]' 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.188 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.188 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:08.188 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.188 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.188 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.188 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.756 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:08.756 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:10.135 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.393 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.651 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.652 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.220 00:18:11.220 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.220 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.221 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.788 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.788 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.788 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.788 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.788 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.788 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.788 { 00:18:11.788 "cntlid": 9, 00:18:11.789 "qid": 0, 00:18:11.789 "state": "enabled", 00:18:11.789 "thread": "nvmf_tgt_poll_group_000", 00:18:11.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:11.789 "listen_address": { 00:18:11.789 "trtype": "TCP", 00:18:11.789 "adrfam": "IPv4", 00:18:11.789 "traddr": "10.0.0.2", 00:18:11.789 "trsvcid": "4420" 00:18:11.789 }, 00:18:11.789 "peer_address": { 00:18:11.789 "trtype": "TCP", 00:18:11.789 "adrfam": "IPv4", 00:18:11.789 "traddr": "10.0.0.1", 00:18:11.789 "trsvcid": "36374" 00:18:11.789 }, 00:18:11.789 "auth": { 00:18:11.789 "state": "completed", 00:18:11.789 "digest": "sha256", 00:18:11.789 "dhgroup": "ffdhe2048" 00:18:11.789 } 00:18:11.789 } 00:18:11.789 ]' 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.789 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.355 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:18:12.355 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:18:14.262 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.262 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:14.262 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.262 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.262 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.262 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.262 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.263 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.834 15:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.834 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.404 00:18:15.404 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.404 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.404 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.972 { 00:18:15.972 "cntlid": 11, 00:18:15.972 "qid": 0, 00:18:15.972 "state": "enabled", 00:18:15.972 "thread": "nvmf_tgt_poll_group_000", 00:18:15.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:15.972 "listen_address": { 00:18:15.972 "trtype": "TCP", 00:18:15.972 "adrfam": "IPv4", 00:18:15.972 "traddr": "10.0.0.2", 00:18:15.972 "trsvcid": "4420" 00:18:15.972 }, 00:18:15.972 "peer_address": { 00:18:15.972 "trtype": "TCP", 00:18:15.972 "adrfam": "IPv4", 00:18:15.972 "traddr": "10.0.0.1", 00:18:15.972 "trsvcid": "36392" 00:18:15.972 }, 00:18:15.972 "auth": { 00:18:15.972 "state": "completed", 00:18:15.972 "digest": "sha256", 00:18:15.972 "dhgroup": "ffdhe2048" 00:18:15.972 } 00:18:15.972 } 00:18:15.972 ]' 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.972 15:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.972 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.230 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.230 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.230 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.230 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.230 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.490 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:16.491 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:17.962 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.531 15:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.531 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.790 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.790 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.790 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.790 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.049 00:18:19.049 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.049 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.049 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.616 { 00:18:19.616 "cntlid": 13, 00:18:19.616 "qid": 0, 00:18:19.616 "state": "enabled", 00:18:19.616 "thread": "nvmf_tgt_poll_group_000", 00:18:19.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:19.616 "listen_address": { 00:18:19.616 "trtype": "TCP", 00:18:19.616 "adrfam": "IPv4", 00:18:19.616 "traddr": "10.0.0.2", 00:18:19.616 "trsvcid": "4420" 00:18:19.616 }, 00:18:19.616 "peer_address": { 00:18:19.616 "trtype": "TCP", 00:18:19.616 "adrfam": "IPv4", 00:18:19.616 "traddr": "10.0.0.1", 00:18:19.616 "trsvcid": "38772" 00:18:19.616 }, 00:18:19.616 "auth": { 00:18:19.616 "state": "completed", 00:18:19.616 "digest": 
"sha256", 00:18:19.616 "dhgroup": "ffdhe2048" 00:18:19.616 } 00:18:19.616 } 00:18:19.616 ]' 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.616 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.185 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:20.185 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.563 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.132 15:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.132 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.699 00:18:22.699 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.699 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.699 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.959 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.959 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.959 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.959 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.959 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.959 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.959 { 00:18:22.959 "cntlid": 15, 00:18:22.959 "qid": 0, 00:18:22.959 "state": "enabled", 00:18:22.959 "thread": "nvmf_tgt_poll_group_000", 00:18:22.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:22.959 "listen_address": { 00:18:22.959 "trtype": "TCP", 00:18:22.959 "adrfam": "IPv4", 00:18:22.959 "traddr": "10.0.0.2", 00:18:22.959 "trsvcid": "4420" 00:18:22.959 }, 00:18:22.959 "peer_address": { 00:18:22.959 "trtype": "TCP", 00:18:22.959 "adrfam": "IPv4", 00:18:22.959 "traddr": "10.0.0.1", 00:18:22.959 
"trsvcid": "38808" 00:18:22.959 }, 00:18:22.959 "auth": { 00:18:22.959 "state": "completed", 00:18:22.959 "digest": "sha256", 00:18:22.959 "dhgroup": "ffdhe2048" 00:18:22.959 } 00:18:22.959 } 00:18:22.959 ]' 00:18:22.959 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.218 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.218 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.218 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.218 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.218 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.218 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.218 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.479 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:23.479 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:25.388 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.648 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:26.215 15:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.215 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.784 00:18:26.784 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.784 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.784 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.354 { 00:18:27.354 "cntlid": 17, 00:18:27.354 "qid": 0, 00:18:27.354 "state": "enabled", 00:18:27.354 "thread": "nvmf_tgt_poll_group_000", 00:18:27.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:27.354 "listen_address": { 00:18:27.354 "trtype": "TCP", 00:18:27.354 "adrfam": "IPv4", 
00:18:27.354 "traddr": "10.0.0.2", 00:18:27.354 "trsvcid": "4420" 00:18:27.354 }, 00:18:27.354 "peer_address": { 00:18:27.354 "trtype": "TCP", 00:18:27.354 "adrfam": "IPv4", 00:18:27.354 "traddr": "10.0.0.1", 00:18:27.354 "trsvcid": "38836" 00:18:27.354 }, 00:18:27.354 "auth": { 00:18:27.354 "state": "completed", 00:18:27.354 "digest": "sha256", 00:18:27.354 "dhgroup": "ffdhe3072" 00:18:27.354 } 00:18:27.354 } 00:18:27.354 ]' 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.354 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.354 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.354 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.354 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.354 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.354 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.924 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:18:27.924 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:30.466 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.466 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.038 00:18:31.038 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.038 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.038 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.606 { 
00:18:31.606 "cntlid": 19, 00:18:31.606 "qid": 0, 00:18:31.606 "state": "enabled", 00:18:31.606 "thread": "nvmf_tgt_poll_group_000", 00:18:31.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:31.606 "listen_address": { 00:18:31.606 "trtype": "TCP", 00:18:31.606 "adrfam": "IPv4", 00:18:31.606 "traddr": "10.0.0.2", 00:18:31.606 "trsvcid": "4420" 00:18:31.606 }, 00:18:31.606 "peer_address": { 00:18:31.606 "trtype": "TCP", 00:18:31.606 "adrfam": "IPv4", 00:18:31.606 "traddr": "10.0.0.1", 00:18:31.606 "trsvcid": "43374" 00:18:31.606 }, 00:18:31.606 "auth": { 00:18:31.606 "state": "completed", 00:18:31.606 "digest": "sha256", 00:18:31.606 "dhgroup": "ffdhe3072" 00:18:31.606 } 00:18:31.606 } 00:18:31.606 ]' 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.606 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.864 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.864 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.864 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.434 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:32.434 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:33.812 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:34.381 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.382 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.950 00:18:34.950 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.950 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.950 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.517 15:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.517 { 00:18:35.517 "cntlid": 21, 00:18:35.517 "qid": 0, 00:18:35.517 "state": "enabled", 00:18:35.517 "thread": "nvmf_tgt_poll_group_000", 00:18:35.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:35.517 "listen_address": { 00:18:35.517 "trtype": "TCP", 00:18:35.517 "adrfam": "IPv4", 00:18:35.517 "traddr": "10.0.0.2", 00:18:35.517 "trsvcid": "4420" 00:18:35.517 }, 00:18:35.517 "peer_address": { 00:18:35.517 "trtype": "TCP", 00:18:35.517 "adrfam": "IPv4", 00:18:35.517 "traddr": "10.0.0.1", 00:18:35.517 "trsvcid": "43392" 00:18:35.517 }, 00:18:35.517 "auth": { 00:18:35.517 "state": "completed", 00:18:35.517 "digest": "sha256", 00:18:35.517 "dhgroup": "ffdhe3072" 00:18:35.517 } 00:18:35.517 } 00:18:35.517 ]' 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.517 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.457 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:36.458 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:38.366 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.366 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.625 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.625 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.625 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.625 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.192 00:18:39.192 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.192 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.192 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.713 15:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.713 { 00:18:39.713 "cntlid": 23, 00:18:39.713 "qid": 0, 00:18:39.713 "state": "enabled", 00:18:39.713 "thread": "nvmf_tgt_poll_group_000", 00:18:39.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:39.713 "listen_address": { 00:18:39.713 "trtype": "TCP", 00:18:39.713 "adrfam": "IPv4", 00:18:39.713 "traddr": "10.0.0.2", 00:18:39.713 "trsvcid": "4420" 00:18:39.713 }, 00:18:39.713 "peer_address": { 00:18:39.713 "trtype": "TCP", 00:18:39.713 "adrfam": "IPv4", 00:18:39.713 "traddr": "10.0.0.1", 00:18:39.713 "trsvcid": "53990" 00:18:39.713 }, 00:18:39.713 "auth": { 00:18:39.713 "state": "completed", 00:18:39.713 "digest": "sha256", 00:18:39.713 "dhgroup": "ffdhe3072" 00:18:39.713 } 00:18:39.713 } 00:18:39.713 ]' 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.281 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:40.281 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.191 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.450 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.386 00:18:43.386 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.386 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.386 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.956 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.956 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.956 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.956 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.956 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.956 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.956 { 00:18:43.956 "cntlid": 25, 00:18:43.956 "qid": 0, 00:18:43.956 "state": "enabled", 00:18:43.956 "thread": "nvmf_tgt_poll_group_000", 00:18:43.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:43.956 "listen_address": { 00:18:43.956 "trtype": "TCP", 00:18:43.956 "adrfam": "IPv4", 00:18:43.956 "traddr": "10.0.0.2", 00:18:43.956 "trsvcid": "4420" 00:18:43.956 }, 00:18:43.956 "peer_address": { 00:18:43.957 "trtype": "TCP", 00:18:43.957 "adrfam": "IPv4", 00:18:43.957 "traddr": "10.0.0.1", 00:18:43.957 "trsvcid": "54012" 00:18:43.957 }, 00:18:43.957 "auth": { 00:18:43.957 "state": "completed", 00:18:43.957 "digest": "sha256", 00:18:43.957 "dhgroup": "ffdhe4096" 00:18:43.957 } 00:18:43.957 } 00:18:43.957 ]' 00:18:43.957 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.215 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.215 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.215 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.215 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.215 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.215 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.215 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.474 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:18:44.474 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:18:46.470 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.470 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:46.470 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.470 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.470 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.470 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.470 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:46.470 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.040 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.979 00:18:47.979 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.979 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.979 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.237 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.237 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.237 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.237 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.237 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.237 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.237 { 00:18:48.237 "cntlid": 27, 00:18:48.237 "qid": 0, 00:18:48.237 "state": "enabled", 00:18:48.237 "thread": "nvmf_tgt_poll_group_000", 00:18:48.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:48.237 "listen_address": { 00:18:48.237 "trtype": "TCP", 00:18:48.237 "adrfam": "IPv4", 00:18:48.237 "traddr": "10.0.0.2", 00:18:48.237 "trsvcid": "4420" 00:18:48.237 }, 00:18:48.237 "peer_address": { 00:18:48.237 "trtype": "TCP", 00:18:48.237 "adrfam": "IPv4", 00:18:48.237 "traddr": "10.0.0.1", 00:18:48.237 "trsvcid": "54036" 00:18:48.237 }, 00:18:48.237 "auth": { 00:18:48.237 "state": "completed", 00:18:48.237 "digest": "sha256", 00:18:48.237 "dhgroup": "ffdhe4096" 00:18:48.237 } 00:18:48.237 } 00:18:48.237 ]' 00:18:48.237 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.497 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.497 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.497 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.497 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.497 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.497 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.497 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.068 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:49.068 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:50.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.446 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.706 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.275 00:18:51.275 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:18:51.275 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.275 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.842 { 00:18:51.842 "cntlid": 29, 00:18:51.842 "qid": 0, 00:18:51.842 "state": "enabled", 00:18:51.842 "thread": "nvmf_tgt_poll_group_000", 00:18:51.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:51.842 "listen_address": { 00:18:51.842 "trtype": "TCP", 00:18:51.842 "adrfam": "IPv4", 00:18:51.842 "traddr": "10.0.0.2", 00:18:51.842 "trsvcid": "4420" 00:18:51.842 }, 00:18:51.842 "peer_address": { 00:18:51.842 "trtype": "TCP", 00:18:51.842 "adrfam": "IPv4", 00:18:51.842 "traddr": "10.0.0.1", 00:18:51.842 "trsvcid": "44506" 00:18:51.842 }, 00:18:51.842 "auth": { 00:18:51.842 "state": "completed", 00:18:51.842 "digest": "sha256", 00:18:51.842 "dhgroup": "ffdhe4096" 00:18:51.842 } 00:18:51.842 } 00:18:51.842 ]' 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.842 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.411 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:52.411 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: 
--dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:18:53.788 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.788 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:53.788 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.788 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.788 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.788 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.788 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.789 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.049 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.616 00:18:54.616 15:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.616 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.616 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.215 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.215 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.215 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.215 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.215 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.215 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.215 { 00:18:55.215 "cntlid": 31, 00:18:55.215 "qid": 0, 00:18:55.215 "state": "enabled", 00:18:55.215 "thread": "nvmf_tgt_poll_group_000", 00:18:55.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:55.215 "listen_address": { 00:18:55.215 "trtype": "TCP", 00:18:55.215 "adrfam": "IPv4", 00:18:55.215 "traddr": "10.0.0.2", 00:18:55.215 "trsvcid": "4420" 00:18:55.215 }, 00:18:55.215 "peer_address": { 00:18:55.215 "trtype": "TCP", 00:18:55.215 "adrfam": "IPv4", 00:18:55.215 "traddr": "10.0.0.1", 00:18:55.215 "trsvcid": "44524" 00:18:55.215 }, 00:18:55.215 "auth": { 00:18:55.215 "state": "completed", 00:18:55.215 "digest": "sha256", 00:18:55.215 "dhgroup": "ffdhe4096" 00:18:55.215 } 00:18:55.215 } 00:18:55.215 ]' 00:18:55.215 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.215 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.215 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.474 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.474 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.474 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.474 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.474 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.732 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:55.732 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret 
DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:18:57.635 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.893 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.462 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.398 00:18:59.398 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.398 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.398 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.964 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.964 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.964 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.964 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.964 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.964 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.964 { 00:18:59.964 "cntlid": 33, 00:18:59.964 "qid": 0, 00:18:59.964 "state": "enabled", 00:18:59.964 "thread": "nvmf_tgt_poll_group_000", 00:18:59.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:59.964 "listen_address": { 00:18:59.964 "trtype": "TCP", 00:18:59.964 "adrfam": "IPv4", 00:18:59.964 "traddr": "10.0.0.2", 00:18:59.964 "trsvcid": "4420" 00:18:59.964 }, 00:18:59.964 "peer_address": { 00:18:59.964 "trtype": "TCP", 00:18:59.964 "adrfam": "IPv4", 00:18:59.964 "traddr": "10.0.0.1", 00:18:59.964 "trsvcid": "35024" 00:18:59.964 }, 00:18:59.964 "auth": { 00:18:59.964 "state": "completed", 00:18:59.964 "digest": "sha256", 00:18:59.964 "dhgroup": "ffdhe6144" 00:18:59.964 } 00:18:59.964 } 00:18:59.964 ]' 00:18:59.964 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.225 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.225 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.225 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.225 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.225 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.225 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.225 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.484 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret 
DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:00.484 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:01.862 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.862 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:01.862 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.862 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.863 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.863 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.863 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.863 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.432 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.815 00:19:03.815 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.815 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.815 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.075 { 00:19:04.075 "cntlid": 35, 00:19:04.075 "qid": 0, 00:19:04.075 "state": "enabled", 00:19:04.075 "thread": "nvmf_tgt_poll_group_000", 00:19:04.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:04.075 "listen_address": { 00:19:04.075 "trtype": "TCP", 00:19:04.075 "adrfam": "IPv4", 00:19:04.075 "traddr": "10.0.0.2", 00:19:04.075 "trsvcid": "4420" 00:19:04.075 }, 00:19:04.075 "peer_address": { 00:19:04.075 "trtype": "TCP", 00:19:04.075 "adrfam": "IPv4", 00:19:04.075 "traddr": "10.0.0.1", 00:19:04.075 "trsvcid": "35052" 00:19:04.075 }, 00:19:04.075 "auth": { 00:19:04.075 "state": "completed", 00:19:04.075 "digest": "sha256", 00:19:04.075 "dhgroup": "ffdhe6144" 00:19:04.075 } 00:19:04.075 } 00:19:04.075 ]' 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.075 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.642 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:04.642 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:06.548 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.548 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:06.549 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.549 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.549 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.549 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.549 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:06.549 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.117 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.054 00:19:08.054 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.054 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.054 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.623 { 00:19:08.623 "cntlid": 37, 00:19:08.623 "qid": 0, 00:19:08.623 "state": "enabled", 00:19:08.623 "thread": "nvmf_tgt_poll_group_000", 00:19:08.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:08.623 "listen_address": { 00:19:08.623 "trtype": "TCP", 00:19:08.623 "adrfam": "IPv4", 00:19:08.623 "traddr": "10.0.0.2", 00:19:08.623 "trsvcid": "4420" 00:19:08.623 }, 00:19:08.623 "peer_address": { 00:19:08.623 "trtype": "TCP", 00:19:08.623 "adrfam": "IPv4", 00:19:08.623 "traddr": "10.0.0.1", 00:19:08.623 "trsvcid": "35072" 00:19:08.623 }, 00:19:08.623 "auth": { 00:19:08.623 "state": "completed", 00:19:08.623 "digest": "sha256", 00:19:08.623 "dhgroup": "ffdhe6144" 00:19:08.623 } 00:19:08.623 } 00:19:08.623 ]' 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:08.623 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.192 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:19:09.192 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.098 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.668 15:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.668 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.047 00:19:13.047 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.047 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.047 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.614 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.615 { 00:19:13.615 "cntlid": 39, 00:19:13.615 "qid": 0, 00:19:13.615 "state": "enabled", 00:19:13.615 "thread": "nvmf_tgt_poll_group_000", 00:19:13.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:13.615 "listen_address": { 00:19:13.615 "trtype": "TCP", 00:19:13.615 "adrfam": "IPv4", 00:19:13.615 "traddr": "10.0.0.2", 00:19:13.615 "trsvcid": "4420" 00:19:13.615 }, 00:19:13.615 "peer_address": { 00:19:13.615 "trtype": "TCP", 00:19:13.615 "adrfam": "IPv4", 00:19:13.615 "traddr": "10.0.0.1", 00:19:13.615 "trsvcid": "37282" 00:19:13.615 }, 00:19:13.615 "auth": { 00:19:13.615 "state": "completed", 00:19:13.615 "digest": "sha256", 00:19:13.615 "dhgroup": "ffdhe6144" 00:19:13.615 } 00:19:13.615 } 00:19:13.615 ]' 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.615 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.228 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:19:14.228 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:19:16.140 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
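For orientation, the trace above repeats one connect/verify cycle per digest, dhgroup and key index. Below is a minimal sketch of that cycle, using the same scripts/rpc.py calls that appear in the log. It assumes the DH-HMAC-CHAP keys named key0/ckey0 were registered earlier in target/auth.sh (not shown in this excerpt), that the host-side bdev_nvme application listens on /var/tmp/host.sock while the target uses the default RPC socket (which is what rpc_cmd wraps in the trace), and it reuses the address and NQNs taken from the log.

    # 1) Restrict the host application to one digest/dhgroup combination per iteration.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # 2) On the target, allow the host NQN on the subsystem and pin its DH-CHAP keys
    #    (for the key3 iterations the controller key is omitted, as in the trace).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3) From the host side, attach a controller with the matching keys.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4) Verify on the target that the new qpair negotiated the expected parameters.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # expected output per the trace: sha256 / ffdhe8192 / completed

    # 5) Tear down before the next digest/dhgroup/key combination.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-initiator leg seen at the start of each block above performs the same handshake from nvme-cli: nvme connect ... --dhchap-secret/--dhchap-ctrl-secret against 10.0.0.2, followed by nvme disconnect and nvmf_subsystem_remove_host before the next dhgroup is configured.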
00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.141 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.050 00:19:18.050 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.050 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.050 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.309 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.309 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.309 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.309 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.309 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.309 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.309 { 00:19:18.309 "cntlid": 41, 00:19:18.309 "qid": 0, 00:19:18.309 "state": "enabled", 00:19:18.309 "thread": "nvmf_tgt_poll_group_000", 00:19:18.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:18.309 "listen_address": { 00:19:18.309 "trtype": "TCP", 00:19:18.309 "adrfam": "IPv4", 00:19:18.309 "traddr": "10.0.0.2", 00:19:18.309 "trsvcid": "4420" 00:19:18.309 }, 00:19:18.309 "peer_address": { 00:19:18.309 "trtype": "TCP", 00:19:18.309 "adrfam": "IPv4", 00:19:18.309 "traddr": "10.0.0.1", 00:19:18.309 "trsvcid": "37312" 00:19:18.309 }, 00:19:18.309 "auth": { 00:19:18.309 "state": "completed", 00:19:18.309 "digest": "sha256", 00:19:18.309 "dhgroup": "ffdhe8192" 00:19:18.309 } 00:19:18.309 } 00:19:18.309 ]' 00:19:18.309 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.309 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.309 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.309 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.309 15:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.309 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.309 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.309 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.247 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:19.248 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.153 15:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.059 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.059 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.060 { 00:19:23.060 "cntlid": 43, 00:19:23.060 "qid": 0, 00:19:23.060 "state": "enabled", 00:19:23.060 "thread": "nvmf_tgt_poll_group_000", 00:19:23.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:23.060 "listen_address": { 00:19:23.060 "trtype": "TCP", 00:19:23.060 "adrfam": "IPv4", 00:19:23.060 "traddr": "10.0.0.2", 00:19:23.060 "trsvcid": "4420" 00:19:23.060 }, 00:19:23.060 "peer_address": { 00:19:23.060 "trtype": "TCP", 00:19:23.060 "adrfam": "IPv4", 00:19:23.060 "traddr": "10.0.0.1", 00:19:23.060 "trsvcid": "52322" 00:19:23.060 }, 00:19:23.060 "auth": { 00:19:23.060 "state": "completed", 00:19:23.060 "digest": "sha256", 00:19:23.060 "dhgroup": "ffdhe8192" 00:19:23.060 } 00:19:23.060 } 00:19:23.060 ]' 00:19:23.060 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.060 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:23.060 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.060 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.060 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.318 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.318 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.318 15:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.577 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:23.577 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.483 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.420 15:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.420 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.797 00:19:28.056 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.056 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.056 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.316 { 00:19:28.316 "cntlid": 45, 00:19:28.316 "qid": 0, 00:19:28.316 "state": "enabled", 00:19:28.316 "thread": "nvmf_tgt_poll_group_000", 00:19:28.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:28.316 "listen_address": { 00:19:28.316 "trtype": "TCP", 00:19:28.316 "adrfam": "IPv4", 00:19:28.316 "traddr": "10.0.0.2", 00:19:28.316 "trsvcid": "4420" 00:19:28.316 }, 00:19:28.316 "peer_address": { 00:19:28.316 "trtype": "TCP", 00:19:28.316 "adrfam": "IPv4", 00:19:28.316 "traddr": "10.0.0.1", 00:19:28.316 "trsvcid": "52336" 00:19:28.316 }, 00:19:28.316 "auth": { 00:19:28.316 "state": "completed", 00:19:28.316 "digest": "sha256", 00:19:28.316 "dhgroup": "ffdhe8192" 00:19:28.316 } 00:19:28.316 } 00:19:28.316 ]' 00:19:28.316 
15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.316 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.577 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.577 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.577 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.146 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:19:29.146 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:19:30.527 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.528 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:30.528 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.528 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.528 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.528 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.528 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.528 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.469 15:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.469 15:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.849 00:19:32.849 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.849 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.849 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.419 { 00:19:33.419 "cntlid": 47, 00:19:33.419 "qid": 0, 00:19:33.419 "state": "enabled", 00:19:33.419 "thread": "nvmf_tgt_poll_group_000", 00:19:33.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:33.419 "listen_address": { 00:19:33.419 "trtype": "TCP", 00:19:33.419 "adrfam": "IPv4", 00:19:33.419 "traddr": "10.0.0.2", 00:19:33.419 "trsvcid": "4420" 00:19:33.419 }, 00:19:33.419 "peer_address": { 00:19:33.419 "trtype": "TCP", 00:19:33.419 "adrfam": "IPv4", 00:19:33.419 "traddr": "10.0.0.1", 00:19:33.419 "trsvcid": "52742" 00:19:33.419 }, 00:19:33.419 "auth": { 00:19:33.419 "state": "completed", 00:19:33.419 
"digest": "sha256", 00:19:33.419 "dhgroup": "ffdhe8192" 00:19:33.419 } 00:19:33.419 } 00:19:33.419 ]' 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.419 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.355 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:19:34.355 15:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.737 15:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:36.306 15:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.306 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.877 00:19:36.877 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.877 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.877 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.137 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.137 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.137 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.137 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.137 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.137 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.137 { 00:19:37.137 "cntlid": 49, 00:19:37.137 "qid": 0, 00:19:37.137 "state": "enabled", 00:19:37.137 "thread": "nvmf_tgt_poll_group_000", 00:19:37.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:37.137 "listen_address": { 00:19:37.137 "trtype": "TCP", 00:19:37.137 "adrfam": "IPv4", 
00:19:37.137 "traddr": "10.0.0.2", 00:19:37.137 "trsvcid": "4420" 00:19:37.137 }, 00:19:37.137 "peer_address": { 00:19:37.137 "trtype": "TCP", 00:19:37.137 "adrfam": "IPv4", 00:19:37.137 "traddr": "10.0.0.1", 00:19:37.137 "trsvcid": "52772" 00:19:37.137 }, 00:19:37.137 "auth": { 00:19:37.137 "state": "completed", 00:19:37.137 "digest": "sha384", 00:19:37.137 "dhgroup": "null" 00:19:37.137 } 00:19:37.137 } 00:19:37.137 ]' 00:19:37.137 15:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.397 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.397 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.397 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:37.397 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.397 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.397 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.397 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.655 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:37.656 15:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.561 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.821 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.758 00:19:40.758 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.758 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.758 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.018 { 00:19:41.018 "cntlid": 51, 00:19:41.018 "qid": 0, 00:19:41.018 "state": "enabled", 
00:19:41.018 "thread": "nvmf_tgt_poll_group_000", 00:19:41.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:41.018 "listen_address": { 00:19:41.018 "trtype": "TCP", 00:19:41.018 "adrfam": "IPv4", 00:19:41.018 "traddr": "10.0.0.2", 00:19:41.018 "trsvcid": "4420" 00:19:41.018 }, 00:19:41.018 "peer_address": { 00:19:41.018 "trtype": "TCP", 00:19:41.018 "adrfam": "IPv4", 00:19:41.018 "traddr": "10.0.0.1", 00:19:41.018 "trsvcid": "48286" 00:19:41.018 }, 00:19:41.018 "auth": { 00:19:41.018 "state": "completed", 00:19:41.018 "digest": "sha384", 00:19:41.018 "dhgroup": "null" 00:19:41.018 } 00:19:41.018 } 00:19:41.018 ]' 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.018 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.586 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:41.586 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:43.491 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.770 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.387 00:19:44.387 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.387 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.387 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.957 15:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.957 { 00:19:44.957 "cntlid": 53, 00:19:44.957 "qid": 0, 00:19:44.957 "state": "enabled", 00:19:44.957 "thread": "nvmf_tgt_poll_group_000", 00:19:44.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:44.957 "listen_address": { 00:19:44.957 "trtype": "TCP", 00:19:44.957 "adrfam": "IPv4", 00:19:44.957 "traddr": "10.0.0.2", 00:19:44.957 "trsvcid": "4420" 00:19:44.957 }, 00:19:44.957 "peer_address": { 00:19:44.957 "trtype": "TCP", 00:19:44.957 "adrfam": "IPv4", 00:19:44.957 "traddr": "10.0.0.1", 00:19:44.957 "trsvcid": "48312" 00:19:44.957 }, 00:19:44.957 "auth": { 00:19:44.957 "state": "completed", 00:19:44.957 "digest": "sha384", 00:19:44.957 "dhgroup": "null" 00:19:44.957 } 00:19:44.957 } 00:19:44.957 ]' 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.957 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.958 15:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.529 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:19:45.529 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:47.437 15:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.437 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.005 00:19:48.005 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.005 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.005 15:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.571 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.571 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.572 { 00:19:48.572 "cntlid": 55, 00:19:48.572 "qid": 0, 00:19:48.572 "state": "enabled", 00:19:48.572 "thread": "nvmf_tgt_poll_group_000", 00:19:48.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:48.572 "listen_address": { 00:19:48.572 "trtype": "TCP", 00:19:48.572 "adrfam": "IPv4", 00:19:48.572 "traddr": "10.0.0.2", 00:19:48.572 "trsvcid": "4420" 00:19:48.572 }, 00:19:48.572 "peer_address": { 00:19:48.572 "trtype": "TCP", 00:19:48.572 "adrfam": "IPv4", 00:19:48.572 "traddr": "10.0.0.1", 00:19:48.572 "trsvcid": "40202" 00:19:48.572 }, 00:19:48.572 "auth": { 00:19:48.572 "state": "completed", 00:19:48.572 "digest": "sha384", 00:19:48.572 "dhgroup": "null" 00:19:48.572 } 00:19:48.572 } 00:19:48.572 ]' 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.572 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.141 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:19:49.141 15:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.049 15:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.049 15:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.655 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:51.655 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.655 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:51.655 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.655 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.656 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.226 00:19:52.226 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.226 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.226 15:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.796 { 00:19:52.796 "cntlid": 57, 00:19:52.796 "qid": 0, 00:19:52.796 "state": "enabled", 00:19:52.796 "thread": "nvmf_tgt_poll_group_000", 00:19:52.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:52.796 "listen_address": { 00:19:52.796 "trtype": "TCP", 00:19:52.796 "adrfam": "IPv4", 00:19:52.796 "traddr": "10.0.0.2", 00:19:52.796 "trsvcid": "4420" 00:19:52.796 }, 00:19:52.796 "peer_address": { 00:19:52.796 "trtype": "TCP", 00:19:52.796 "adrfam": "IPv4", 00:19:52.796 "traddr": "10.0.0.1", 00:19:52.796 "trsvcid": "40234" 00:19:52.796 }, 00:19:52.796 "auth": { 00:19:52.796 "state": "completed", 00:19:52.796 "digest": "sha384", 00:19:52.796 "dhgroup": "ffdhe2048" 00:19:52.796 } 00:19:52.796 } 00:19:52.796 ]' 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.796 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.365 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:53.365 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.273 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.533 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.101 00:19:56.101 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.101 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.101 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.040 { 00:19:57.040 "cntlid": 59, 00:19:57.040 "qid": 0, 00:19:57.040 "state": "enabled", 00:19:57.040 "thread": "nvmf_tgt_poll_group_000", 00:19:57.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:57.040 "listen_address": { 00:19:57.040 "trtype": "TCP", 00:19:57.040 "adrfam": "IPv4", 00:19:57.040 "traddr": "10.0.0.2", 00:19:57.040 "trsvcid": "4420" 00:19:57.040 }, 00:19:57.040 "peer_address": { 00:19:57.040 "trtype": "TCP", 00:19:57.040 "adrfam": "IPv4", 00:19:57.040 "traddr": "10.0.0.1", 00:19:57.040 "trsvcid": "40268" 00:19:57.040 }, 00:19:57.040 "auth": { 00:19:57.040 "state": "completed", 00:19:57.040 "digest": "sha384", 00:19:57.040 "dhgroup": "ffdhe2048" 00:19:57.040 } 00:19:57.040 } 00:19:57.040 ]' 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.040 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.609 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:57.609 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.986 15:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.554 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.122 00:20:00.122 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.122 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.122 15:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.060 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.060 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.060 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.060 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.060 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.060 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.060 { 00:20:01.060 "cntlid": 61, 00:20:01.060 "qid": 0, 00:20:01.060 "state": "enabled", 00:20:01.061 "thread": "nvmf_tgt_poll_group_000", 00:20:01.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:01.061 "listen_address": { 00:20:01.061 "trtype": "TCP", 00:20:01.061 "adrfam": "IPv4", 00:20:01.061 "traddr": "10.0.0.2", 00:20:01.061 "trsvcid": "4420" 00:20:01.061 }, 00:20:01.061 "peer_address": { 00:20:01.061 "trtype": "TCP", 00:20:01.061 "adrfam": "IPv4", 00:20:01.061 "traddr": "10.0.0.1", 00:20:01.061 "trsvcid": "59084" 00:20:01.061 }, 00:20:01.061 "auth": { 00:20:01.061 "state": "completed", 00:20:01.061 "digest": "sha384", 00:20:01.061 "dhgroup": "ffdhe2048" 00:20:01.061 } 00:20:01.061 } 00:20:01.061 ]' 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.061 15:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.629 15:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:01.629 15:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:03.532 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.532 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:03.532 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.532 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.532 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.532 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.532 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.533 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.102 15:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.669 00:20:04.669 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.669 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.669 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.237 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.237 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.237 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.238 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.238 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.238 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.238 { 00:20:05.238 "cntlid": 63, 00:20:05.238 "qid": 0, 00:20:05.238 "state": "enabled", 00:20:05.238 "thread": "nvmf_tgt_poll_group_000", 00:20:05.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:05.238 "listen_address": { 00:20:05.238 "trtype": "TCP", 00:20:05.238 "adrfam": "IPv4", 00:20:05.238 "traddr": "10.0.0.2", 00:20:05.238 "trsvcid": "4420" 00:20:05.238 }, 00:20:05.238 "peer_address": { 00:20:05.238 "trtype": "TCP", 00:20:05.238 "adrfam": "IPv4", 00:20:05.238 "traddr": "10.0.0.1", 00:20:05.238 "trsvcid": "59106" 00:20:05.238 }, 00:20:05.238 "auth": { 00:20:05.238 "state": "completed", 00:20:05.238 "digest": "sha384", 00:20:05.238 "dhgroup": "ffdhe2048" 00:20:05.238 } 00:20:05.238 } 00:20:05.238 ]' 00:20:05.238 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.238 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.238 15:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.238 15:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.238 15:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.238 15:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.238 15:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.238 15:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.806 15:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:05.806 15:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:07.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.713 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.282 
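
  [The same cycle repeats because target/auth.sh iterates every digest over every dhgroup and every key index; the @118/@119/@120 loop headers and the @121/@123 calls visible throughout this trace reconstruct to roughly the driver below. The digests/dhgroups/keys arrays are populated earlier in the script; hostrpc and connect_authenticate are the helpers seen expanding in the trace above. A schematic reconstruction, not the script's exact text.]

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # Pin the SPDK initiator to a single digest/dhgroup combination ...
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # ... then run the add_host / attach / verify / detach cycle for this key index
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
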
00:20:08.282 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.282 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.282 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.850 { 00:20:08.850 "cntlid": 65, 00:20:08.850 "qid": 0, 00:20:08.850 "state": "enabled", 00:20:08.850 "thread": "nvmf_tgt_poll_group_000", 00:20:08.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:08.850 "listen_address": { 00:20:08.850 "trtype": "TCP", 00:20:08.850 "adrfam": "IPv4", 00:20:08.850 "traddr": "10.0.0.2", 00:20:08.850 "trsvcid": "4420" 00:20:08.850 }, 00:20:08.850 "peer_address": { 00:20:08.850 "trtype": "TCP", 00:20:08.850 "adrfam": "IPv4", 00:20:08.850 "traddr": "10.0.0.1", 00:20:08.850 "trsvcid": "46684" 00:20:08.850 }, 00:20:08.850 "auth": { 00:20:08.850 "state": "completed", 00:20:08.850 "digest": "sha384", 00:20:08.850 "dhgroup": "ffdhe3072" 00:20:08.850 } 00:20:08.850 } 00:20:08.850 ]' 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.850 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.109 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.109 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.109 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.109 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.109 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.679 15:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:20:09.679 15:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:20:11.062 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.062 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:11.062 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.062 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.323 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.323 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.323 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.323 15:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.583 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.843 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.843 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.843 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.843 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.105 00:20:12.407 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.407 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.407 15:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.671 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.671 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.671 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.671 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.930 { 00:20:12.930 "cntlid": 67, 00:20:12.930 "qid": 0, 00:20:12.930 "state": "enabled", 00:20:12.930 "thread": "nvmf_tgt_poll_group_000", 00:20:12.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:12.930 "listen_address": { 00:20:12.930 "trtype": "TCP", 00:20:12.930 "adrfam": "IPv4", 00:20:12.930 "traddr": "10.0.0.2", 00:20:12.930 "trsvcid": "4420" 00:20:12.930 }, 00:20:12.930 "peer_address": { 00:20:12.930 "trtype": "TCP", 00:20:12.930 "adrfam": "IPv4", 00:20:12.930 "traddr": "10.0.0.1", 00:20:12.930 "trsvcid": "46706" 00:20:12.930 }, 00:20:12.930 "auth": { 00:20:12.930 "state": "completed", 00:20:12.930 "digest": "sha384", 00:20:12.930 "dhgroup": "ffdhe3072" 00:20:12.930 } 00:20:12.930 } 00:20:12.930 ]' 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.930 15:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.501 15:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret 
DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:20:13.501 15:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.403 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.339 15:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.598 00:20:16.598 15:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.598 15:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.598 15:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.165 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.165 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.165 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.165 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.423 { 00:20:17.423 "cntlid": 69, 00:20:17.423 "qid": 0, 00:20:17.423 "state": "enabled", 00:20:17.423 "thread": "nvmf_tgt_poll_group_000", 00:20:17.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:17.423 "listen_address": { 00:20:17.423 "trtype": "TCP", 00:20:17.423 "adrfam": "IPv4", 00:20:17.423 "traddr": "10.0.0.2", 00:20:17.423 "trsvcid": "4420" 00:20:17.423 }, 00:20:17.423 "peer_address": { 00:20:17.423 "trtype": "TCP", 00:20:17.423 "adrfam": "IPv4", 00:20:17.423 "traddr": "10.0.0.1", 00:20:17.423 "trsvcid": "46730" 00:20:17.423 }, 00:20:17.423 "auth": { 00:20:17.423 "state": "completed", 00:20:17.423 "digest": "sha384", 00:20:17.423 "dhgroup": "ffdhe3072" 00:20:17.423 } 00:20:17.423 } 00:20:17.423 ]' 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.423 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:17.992 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:17.992 15:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.894 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.153 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.153 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.721 00:20:20.721 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.721 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.721 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.291 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.291 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.291 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.291 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.291 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.291 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.291 { 00:20:21.291 "cntlid": 71, 00:20:21.291 "qid": 0, 00:20:21.291 "state": "enabled", 00:20:21.291 "thread": "nvmf_tgt_poll_group_000", 00:20:21.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:21.291 "listen_address": { 00:20:21.291 "trtype": "TCP", 00:20:21.291 "adrfam": "IPv4", 00:20:21.291 "traddr": "10.0.0.2", 00:20:21.291 "trsvcid": "4420" 00:20:21.291 }, 00:20:21.291 "peer_address": { 00:20:21.291 "trtype": "TCP", 00:20:21.291 "adrfam": "IPv4", 00:20:21.291 "traddr": "10.0.0.1", 00:20:21.291 "trsvcid": "54376" 00:20:21.291 }, 00:20:21.291 "auth": { 00:20:21.291 "state": "completed", 00:20:21.291 "digest": "sha384", 00:20:21.291 "dhgroup": "ffdhe3072" 00:20:21.291 } 00:20:21.291 } 00:20:21.291 ]' 00:20:21.291 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.291 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.291 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.291 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.291 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.551 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.552 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.552 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.811 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:21.812 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.727 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.988 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.560 00:20:24.560 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.560 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.560 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.129 { 00:20:25.129 "cntlid": 73, 00:20:25.129 "qid": 0, 00:20:25.129 "state": "enabled", 00:20:25.129 "thread": "nvmf_tgt_poll_group_000", 00:20:25.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:25.129 "listen_address": { 00:20:25.129 "trtype": "TCP", 00:20:25.129 "adrfam": "IPv4", 00:20:25.129 "traddr": "10.0.0.2", 00:20:25.129 "trsvcid": "4420" 00:20:25.129 }, 00:20:25.129 "peer_address": { 00:20:25.129 "trtype": "TCP", 00:20:25.129 "adrfam": "IPv4", 00:20:25.129 "traddr": "10.0.0.1", 00:20:25.129 "trsvcid": "54406" 00:20:25.129 }, 00:20:25.129 "auth": { 00:20:25.129 "state": "completed", 00:20:25.129 "digest": "sha384", 00:20:25.129 "dhgroup": "ffdhe4096" 00:20:25.129 } 00:20:25.129 } 00:20:25.129 ]' 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.129 
15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.129 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.698 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:20:25.698 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.605 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.866 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.805 00:20:28.805 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.805 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.805 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.064 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.064 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.064 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.064 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.064 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.064 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.064 { 00:20:29.064 "cntlid": 75, 00:20:29.064 "qid": 0, 00:20:29.064 "state": "enabled", 00:20:29.065 "thread": "nvmf_tgt_poll_group_000", 00:20:29.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:29.065 "listen_address": { 00:20:29.065 "trtype": "TCP", 00:20:29.065 "adrfam": "IPv4", 00:20:29.065 "traddr": "10.0.0.2", 00:20:29.065 "trsvcid": "4420" 00:20:29.065 }, 00:20:29.065 "peer_address": { 00:20:29.065 "trtype": "TCP", 00:20:29.065 "adrfam": "IPv4", 00:20:29.065 "traddr": "10.0.0.1", 00:20:29.065 "trsvcid": "55060" 00:20:29.065 }, 00:20:29.065 "auth": { 00:20:29.065 "state": "completed", 00:20:29.065 "digest": "sha384", 00:20:29.065 "dhgroup": "ffdhe4096" 00:20:29.065 } 00:20:29.065 } 00:20:29.065 ]' 00:20:29.065 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.065 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.065 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.065 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:29.065 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.325 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.325 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.325 15:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.891 15:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:20:29.891 15:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.800 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.058 15:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.628 00:20:32.888 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.888 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.888 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.147 { 00:20:33.147 "cntlid": 77, 00:20:33.147 "qid": 0, 00:20:33.147 "state": "enabled", 00:20:33.147 "thread": "nvmf_tgt_poll_group_000", 00:20:33.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:33.147 "listen_address": { 00:20:33.147 "trtype": "TCP", 00:20:33.147 "adrfam": "IPv4", 00:20:33.147 "traddr": "10.0.0.2", 00:20:33.147 "trsvcid": "4420" 00:20:33.147 }, 00:20:33.147 "peer_address": { 00:20:33.147 "trtype": "TCP", 00:20:33.147 "adrfam": "IPv4", 00:20:33.147 "traddr": "10.0.0.1", 00:20:33.147 "trsvcid": "55086" 00:20:33.147 }, 00:20:33.147 "auth": { 00:20:33.147 "state": "completed", 00:20:33.147 "digest": "sha384", 00:20:33.147 "dhgroup": "ffdhe4096" 00:20:33.147 } 00:20:33.147 } 00:20:33.147 ]' 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.147 15:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.147 15:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.405 15:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.405 15:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.405 15:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.664 15:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:33.664 15:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.046 15:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.306 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.874 00:20:36.135 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.135 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.135 15:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.705 { 00:20:36.705 "cntlid": 79, 00:20:36.705 "qid": 0, 00:20:36.705 "state": "enabled", 00:20:36.705 "thread": "nvmf_tgt_poll_group_000", 00:20:36.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:36.705 "listen_address": { 00:20:36.705 "trtype": "TCP", 00:20:36.705 "adrfam": "IPv4", 00:20:36.705 "traddr": "10.0.0.2", 00:20:36.705 "trsvcid": "4420" 00:20:36.705 }, 00:20:36.705 "peer_address": { 00:20:36.705 "trtype": "TCP", 00:20:36.705 "adrfam": "IPv4", 00:20:36.705 "traddr": "10.0.0.1", 00:20:36.705 "trsvcid": "55114" 00:20:36.705 }, 00:20:36.705 "auth": { 00:20:36.705 "state": "completed", 00:20:36.705 "digest": "sha384", 00:20:36.705 "dhgroup": "ffdhe4096" 00:20:36.705 } 00:20:36.705 } 00:20:36.705 ]' 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.705 15:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.705 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.965 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.965 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.965 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.226 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:37.226 15:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.139 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.399 15:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.399 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.802 00:20:40.802 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.802 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.802 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.436 { 00:20:41.436 "cntlid": 81, 00:20:41.436 "qid": 0, 00:20:41.436 "state": "enabled", 00:20:41.436 "thread": "nvmf_tgt_poll_group_000", 00:20:41.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:41.436 "listen_address": { 00:20:41.436 "trtype": "TCP", 00:20:41.436 "adrfam": "IPv4", 00:20:41.436 "traddr": "10.0.0.2", 00:20:41.436 "trsvcid": "4420" 00:20:41.436 }, 00:20:41.436 "peer_address": { 00:20:41.436 "trtype": "TCP", 00:20:41.436 "adrfam": "IPv4", 00:20:41.436 "traddr": "10.0.0.1", 00:20:41.436 "trsvcid": "60450" 00:20:41.436 }, 00:20:41.436 "auth": { 00:20:41.436 "state": "completed", 00:20:41.436 "digest": 
"sha384", 00:20:41.436 "dhgroup": "ffdhe6144" 00:20:41.436 } 00:20:41.436 } 00:20:41.436 ]' 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.436 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.378 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:20:42.378 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.288 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.288 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.228 00:20:45.229 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.229 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.229 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.487 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.487 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.487 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.487 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.747 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.747 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.747 { 00:20:45.747 "cntlid": 83, 00:20:45.747 "qid": 0, 00:20:45.747 "state": "enabled", 00:20:45.747 "thread": "nvmf_tgt_poll_group_000", 00:20:45.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:45.747 "listen_address": { 00:20:45.747 "trtype": "TCP", 00:20:45.747 "adrfam": "IPv4", 00:20:45.747 "traddr": "10.0.0.2", 00:20:45.747 
"trsvcid": "4420" 00:20:45.747 }, 00:20:45.748 "peer_address": { 00:20:45.748 "trtype": "TCP", 00:20:45.748 "adrfam": "IPv4", 00:20:45.748 "traddr": "10.0.0.1", 00:20:45.748 "trsvcid": "60476" 00:20:45.748 }, 00:20:45.748 "auth": { 00:20:45.748 "state": "completed", 00:20:45.748 "digest": "sha384", 00:20:45.748 "dhgroup": "ffdhe6144" 00:20:45.748 } 00:20:45.748 } 00:20:45.748 ]' 00:20:45.748 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.748 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.748 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.748 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.748 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.007 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.007 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.007 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.267 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:20:46.267 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.181 15:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.753 
15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.753 15:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.699 00:20:49.699 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.699 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.699 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.961 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.961 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.961 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.961 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.961 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.961 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.961 { 00:20:49.961 "cntlid": 85, 00:20:49.961 "qid": 0, 00:20:49.961 "state": "enabled", 00:20:49.961 "thread": "nvmf_tgt_poll_group_000", 00:20:49.961 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:49.961 "listen_address": { 00:20:49.961 "trtype": "TCP", 00:20:49.961 "adrfam": "IPv4", 00:20:49.961 "traddr": "10.0.0.2", 00:20:49.961 "trsvcid": "4420" 00:20:49.961 }, 00:20:49.961 "peer_address": { 00:20:49.961 "trtype": "TCP", 00:20:49.961 "adrfam": "IPv4", 00:20:49.961 "traddr": "10.0.0.1", 00:20:49.961 "trsvcid": "33718" 00:20:49.961 }, 00:20:49.961 "auth": { 00:20:49.961 "state": "completed", 00:20:49.961 "digest": "sha384", 00:20:49.961 "dhgroup": "ffdhe6144" 00:20:49.961 } 00:20:49.961 } 00:20:49.961 ]' 00:20:49.961 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.222 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.222 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.222 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.222 15:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.222 15:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.222 15:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.222 15:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.165 15:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:51.165 15:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.076 15:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.076 15:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.017 00:20:54.017 15:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.017 15:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.017 15:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.588 { 00:20:54.588 "cntlid": 87, 
00:20:54.588 "qid": 0, 00:20:54.588 "state": "enabled", 00:20:54.588 "thread": "nvmf_tgt_poll_group_000", 00:20:54.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:54.588 "listen_address": { 00:20:54.588 "trtype": "TCP", 00:20:54.588 "adrfam": "IPv4", 00:20:54.588 "traddr": "10.0.0.2", 00:20:54.588 "trsvcid": "4420" 00:20:54.588 }, 00:20:54.588 "peer_address": { 00:20:54.588 "trtype": "TCP", 00:20:54.588 "adrfam": "IPv4", 00:20:54.588 "traddr": "10.0.0.1", 00:20:54.588 "trsvcid": "33750" 00:20:54.588 }, 00:20:54.588 "auth": { 00:20:54.588 "state": "completed", 00:20:54.588 "digest": "sha384", 00:20:54.588 "dhgroup": "ffdhe6144" 00:20:54.588 } 00:20:54.588 } 00:20:54.588 ]' 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.588 15:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.157 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:55.157 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.699 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.959 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.960 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.872 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.873 { 00:20:59.873 "cntlid": 89, 00:20:59.873 "qid": 0, 00:20:59.873 "state": "enabled", 00:20:59.873 "thread": "nvmf_tgt_poll_group_000", 00:20:59.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:59.873 "listen_address": { 00:20:59.873 "trtype": "TCP", 00:20:59.873 "adrfam": "IPv4", 00:20:59.873 "traddr": "10.0.0.2", 00:20:59.873 "trsvcid": "4420" 00:20:59.873 }, 00:20:59.873 "peer_address": { 00:20:59.873 "trtype": "TCP", 00:20:59.873 "adrfam": "IPv4", 00:20:59.873 "traddr": "10.0.0.1", 00:20:59.873 "trsvcid": "35272" 00:20:59.873 }, 00:20:59.873 "auth": { 00:20:59.873 "state": "completed", 00:20:59.873 "digest": "sha384", 00:20:59.873 "dhgroup": "ffdhe8192" 00:20:59.873 } 00:20:59.873 } 00:20:59.873 ]' 00:20:59.873 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.132 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.132 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.132 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.132 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.132 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.132 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.132 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.702 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:00.702 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:02.614 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.614 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:02.614 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.614 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.614 15:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.614 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.614 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.614 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.185 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.561 00:21:04.561 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.561 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.561 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.132 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.132 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:05.132 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.132 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.132 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.132 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.132 { 00:21:05.132 "cntlid": 91, 00:21:05.132 "qid": 0, 00:21:05.132 "state": "enabled", 00:21:05.132 "thread": "nvmf_tgt_poll_group_000", 00:21:05.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:05.132 "listen_address": { 00:21:05.132 "trtype": "TCP", 00:21:05.132 "adrfam": "IPv4", 00:21:05.132 "traddr": "10.0.0.2", 00:21:05.132 "trsvcid": "4420" 00:21:05.132 }, 00:21:05.132 "peer_address": { 00:21:05.132 "trtype": "TCP", 00:21:05.132 "adrfam": "IPv4", 00:21:05.132 "traddr": "10.0.0.1", 00:21:05.132 "trsvcid": "35312" 00:21:05.132 }, 00:21:05.132 "auth": { 00:21:05.132 "state": "completed", 00:21:05.132 "digest": "sha384", 00:21:05.132 "dhgroup": "ffdhe8192" 00:21:05.132 } 00:21:05.132 } 00:21:05.132 ]' 00:21:05.132 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.392 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.392 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.392 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.392 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.392 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.392 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.392 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.964 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:06.222 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:07.605 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.605 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:07.605 15:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.605 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.605 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.605 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.605 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.605 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.865 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.773 00:21:09.773 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.773 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.773 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.033 15:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.033 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.033 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.033 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.033 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.033 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.033 { 00:21:10.033 "cntlid": 93, 00:21:10.033 "qid": 0, 00:21:10.033 "state": "enabled", 00:21:10.033 "thread": "nvmf_tgt_poll_group_000", 00:21:10.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:10.033 "listen_address": { 00:21:10.033 "trtype": "TCP", 00:21:10.033 "adrfam": "IPv4", 00:21:10.033 "traddr": "10.0.0.2", 00:21:10.033 "trsvcid": "4420" 00:21:10.033 }, 00:21:10.033 "peer_address": { 00:21:10.033 "trtype": "TCP", 00:21:10.033 "adrfam": "IPv4", 00:21:10.033 "traddr": "10.0.0.1", 00:21:10.033 "trsvcid": "46422" 00:21:10.033 }, 00:21:10.033 "auth": { 00:21:10.033 "state": "completed", 00:21:10.033 "digest": "sha384", 00:21:10.033 "dhgroup": "ffdhe8192" 00:21:10.033 } 00:21:10.033 } 00:21:10.033 ]' 00:21:10.034 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.034 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.034 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.293 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.293 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.293 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.293 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.293 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.937 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:21:10.937 15:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:21:12.842 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.842 15:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:12.842 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.842 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.842 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.842 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.842 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.842 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.407 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.407 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.407 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.407 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.407 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.788 00:21:15.045 15:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.045 15:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.046 15:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.305 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.305 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.305 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.305 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.306 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.306 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.306 { 00:21:15.306 "cntlid": 95, 00:21:15.306 "qid": 0, 00:21:15.306 "state": "enabled", 00:21:15.306 "thread": "nvmf_tgt_poll_group_000", 00:21:15.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:15.306 "listen_address": { 00:21:15.306 "trtype": "TCP", 00:21:15.306 "adrfam": "IPv4", 00:21:15.306 "traddr": "10.0.0.2", 00:21:15.306 "trsvcid": "4420" 00:21:15.306 }, 00:21:15.306 "peer_address": { 00:21:15.306 "trtype": "TCP", 00:21:15.306 "adrfam": "IPv4", 00:21:15.306 "traddr": "10.0.0.1", 00:21:15.306 "trsvcid": "46454" 00:21:15.306 }, 00:21:15.306 "auth": { 00:21:15.306 "state": "completed", 00:21:15.306 "digest": "sha384", 00:21:15.306 "dhgroup": "ffdhe8192" 00:21:15.306 } 00:21:15.306 } 00:21:15.306 ]' 00:21:15.306 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.306 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.306 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.566 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.566 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.566 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.566 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.566 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.826 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:21:15.826 15:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.204 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:17.204 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.465 15:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.406 00:21:18.406 
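At this point the loop has moved on to sha512 with the "null" dhgroup, i.e. DH-HMAC-CHAP without an FFDHE exchange, and the test makes the same three assertions against the qpair created by the attach. A short sketch of that check, reusing the subsystem NQN from this log (rpc.py again stands for spdk/scripts/rpc.py talking to the target's default RPC socket):

    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]        # no FFDHE group this round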
15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.406 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.406 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.666 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.666 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.666 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.666 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.666 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.666 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.666 { 00:21:18.666 "cntlid": 97, 00:21:18.666 "qid": 0, 00:21:18.666 "state": "enabled", 00:21:18.666 "thread": "nvmf_tgt_poll_group_000", 00:21:18.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:18.666 "listen_address": { 00:21:18.666 "trtype": "TCP", 00:21:18.666 "adrfam": "IPv4", 00:21:18.666 "traddr": "10.0.0.2", 00:21:18.666 "trsvcid": "4420" 00:21:18.666 }, 00:21:18.666 "peer_address": { 00:21:18.666 "trtype": "TCP", 00:21:18.666 "adrfam": "IPv4", 00:21:18.666 "traddr": "10.0.0.1", 00:21:18.666 "trsvcid": "50828" 00:21:18.666 }, 00:21:18.666 "auth": { 00:21:18.666 "state": "completed", 00:21:18.666 "digest": "sha512", 00:21:18.666 "dhgroup": "null" 00:21:18.666 } 00:21:18.666 } 00:21:18.666 ]' 00:21:18.666 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.926 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.926 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.926 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:18.926 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.926 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.926 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.926 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.496 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:19.496 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.432 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.373 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.942 00:21:22.942 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.942 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.942 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.199 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.199 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.199 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.199 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.199 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.199 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.199 { 00:21:23.199 "cntlid": 99, 00:21:23.199 "qid": 0, 00:21:23.199 "state": "enabled", 00:21:23.199 "thread": "nvmf_tgt_poll_group_000", 00:21:23.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:23.200 "listen_address": { 00:21:23.200 "trtype": "TCP", 00:21:23.200 "adrfam": "IPv4", 00:21:23.200 "traddr": "10.0.0.2", 00:21:23.200 "trsvcid": "4420" 00:21:23.200 }, 00:21:23.200 "peer_address": { 00:21:23.200 "trtype": "TCP", 00:21:23.200 "adrfam": "IPv4", 00:21:23.200 "traddr": "10.0.0.1", 00:21:23.200 "trsvcid": "50856" 00:21:23.200 }, 00:21:23.200 "auth": { 00:21:23.200 "state": "completed", 00:21:23.200 "digest": "sha512", 00:21:23.200 "dhgroup": "null" 00:21:23.200 } 00:21:23.200 } 00:21:23.200 ]' 00:21:23.200 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.200 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.200 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.459 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:23.459 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.459 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.459 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.459 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.717 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:23.717 15:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.619 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
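For reference, the host-side RPC sequence being traced above reduces to the short sketch below. It is illustrative only: the rpc.py path, socket, address and subsystem NQN are the ones from the log, HOSTNQN stands for the host's UUID-based NQN, and the DHCHAP key names key2/ckey2 are assumed to have been registered with the host RPC server earlier in the script (not shown here).

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  # limit the initiator to a single digest / DH-group combination for this round
  $rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # attach a controller that authenticates the host with key2 and verifies the
  # controller with ckey2 (bidirectional DH-HMAC-CHAP)
  $rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2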
00:21:25.880 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.140 00:21:26.140 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.140 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.140 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.798 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.798 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.798 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.798 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.798 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.798 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.798 { 00:21:26.798 "cntlid": 101, 00:21:26.798 "qid": 0, 00:21:26.798 "state": "enabled", 00:21:26.798 "thread": "nvmf_tgt_poll_group_000", 00:21:26.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:26.798 "listen_address": { 00:21:26.798 "trtype": "TCP", 00:21:26.798 "adrfam": "IPv4", 00:21:26.798 "traddr": "10.0.0.2", 00:21:26.798 "trsvcid": "4420" 00:21:26.798 }, 00:21:26.798 "peer_address": { 00:21:26.798 "trtype": "TCP", 00:21:26.798 "adrfam": "IPv4", 00:21:26.798 "traddr": "10.0.0.1", 00:21:26.798 "trsvcid": "50876" 00:21:26.798 }, 00:21:26.798 "auth": { 00:21:26.798 "state": "completed", 00:21:26.798 "digest": "sha512", 00:21:26.798 "dhgroup": "null" 00:21:26.798 } 00:21:26.799 } 00:21:26.799 ]' 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.799 15:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.738 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:21:27.738 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.116 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.684 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.253 00:21:30.253 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.253 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.253 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.512 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.512 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.512 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.512 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.512 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.512 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.512 { 00:21:30.512 "cntlid": 103, 00:21:30.512 "qid": 0, 00:21:30.512 "state": "enabled", 00:21:30.512 "thread": "nvmf_tgt_poll_group_000", 00:21:30.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:30.512 "listen_address": { 00:21:30.512 "trtype": "TCP", 00:21:30.512 "adrfam": "IPv4", 00:21:30.512 "traddr": "10.0.0.2", 00:21:30.512 "trsvcid": "4420" 00:21:30.512 }, 00:21:30.512 "peer_address": { 00:21:30.512 "trtype": "TCP", 00:21:30.512 "adrfam": "IPv4", 00:21:30.512 "traddr": "10.0.0.1", 00:21:30.512 "trsvcid": "47534" 00:21:30.512 }, 00:21:30.512 "auth": { 00:21:30.512 "state": "completed", 00:21:30.512 "digest": "sha512", 00:21:30.512 "dhgroup": "null" 00:21:30.512 } 00:21:30.512 } 00:21:30.512 ]' 00:21:30.512 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.772 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.772 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.772 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:30.772 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.772 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.772 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.772 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.341 15:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:21:31.341 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.252 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
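Each attach in this trace is followed by the same verification pattern; a condensed sketch of it is below, using the subsystem NQN and sockets from the log (the target-side RPC goes to the default socket, the host-side RPC to /var/tmp/host.sock). This is a simplification of what the script checks, not the script itself.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # ask the target which qpair the new controller created and what it negotiated
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # tear the host-side controller down before the next key/dhgroup combination
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0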
00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.822 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.082 00:21:34.082 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.082 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.082 15:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.648 { 00:21:34.648 "cntlid": 105, 00:21:34.648 "qid": 0, 00:21:34.648 "state": "enabled", 00:21:34.648 "thread": "nvmf_tgt_poll_group_000", 00:21:34.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:34.648 "listen_address": { 00:21:34.648 "trtype": "TCP", 00:21:34.648 "adrfam": "IPv4", 00:21:34.648 "traddr": "10.0.0.2", 00:21:34.648 "trsvcid": "4420" 00:21:34.648 }, 00:21:34.648 "peer_address": { 00:21:34.648 "trtype": "TCP", 00:21:34.648 "adrfam": "IPv4", 00:21:34.648 "traddr": "10.0.0.1", 00:21:34.648 "trsvcid": "47566" 00:21:34.648 }, 00:21:34.648 "auth": { 00:21:34.648 "state": "completed", 00:21:34.648 "digest": "sha512", 00:21:34.648 "dhgroup": "ffdhe2048" 00:21:34.648 } 00:21:34.648 } 00:21:34.648 ]' 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:34.648 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.907 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.907 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.907 15:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.167 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:35.167 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:37.076 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.077 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:37.077 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.077 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.077 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.077 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.077 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.077 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.336 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.275 00:21:38.275 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.275 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.275 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.846 { 00:21:38.846 "cntlid": 107, 00:21:38.846 "qid": 0, 00:21:38.846 "state": "enabled", 00:21:38.846 "thread": "nvmf_tgt_poll_group_000", 00:21:38.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:38.846 "listen_address": { 00:21:38.846 "trtype": "TCP", 00:21:38.846 "adrfam": "IPv4", 00:21:38.846 "traddr": "10.0.0.2", 00:21:38.846 "trsvcid": "4420" 00:21:38.846 }, 00:21:38.846 "peer_address": { 00:21:38.846 "trtype": "TCP", 00:21:38.846 "adrfam": "IPv4", 00:21:38.846 "traddr": "10.0.0.1", 00:21:38.846 "trsvcid": "42838" 00:21:38.846 }, 00:21:38.846 "auth": { 00:21:38.846 "state": "completed", 00:21:38.846 "digest": "sha512", 00:21:38.846 "dhgroup": "ffdhe2048" 00:21:38.846 } 00:21:38.846 } 00:21:38.846 ]' 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.846 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.107 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:39.107 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:39.107 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.107 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.107 15:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.675 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:39.675 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.584 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
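The kernel-initiator leg of each round, seen several times in this trace, is an ordinary nvme-cli connect/disconnect pair. The sketch below uses placeholder DHHC-1 secrets in place of the generated test keys, and HOSTNQN/HOSTID stand for the host NQN and UUID used throughout the log.

  # present the host secret and require the controller to prove the ctrl secret
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'
  # once the authenticated connection has been exercised, drop it again
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0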
00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.843 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.101 00:21:42.360 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.360 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.360 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.620 { 00:21:42.620 "cntlid": 109, 00:21:42.620 "qid": 0, 00:21:42.620 "state": "enabled", 00:21:42.620 "thread": "nvmf_tgt_poll_group_000", 00:21:42.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:42.620 "listen_address": { 00:21:42.620 "trtype": "TCP", 00:21:42.620 "adrfam": "IPv4", 00:21:42.620 "traddr": "10.0.0.2", 00:21:42.620 "trsvcid": "4420" 00:21:42.620 }, 00:21:42.620 "peer_address": { 00:21:42.620 "trtype": "TCP", 00:21:42.620 "adrfam": "IPv4", 00:21:42.620 "traddr": "10.0.0.1", 00:21:42.620 "trsvcid": "42872" 00:21:42.620 }, 00:21:42.620 "auth": { 00:21:42.620 "state": "completed", 00:21:42.620 "digest": "sha512", 00:21:42.620 "dhgroup": "ffdhe2048" 00:21:42.620 } 00:21:42.620 } 00:21:42.620 ]' 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.620 15:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.620 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.879 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.879 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.879 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.137 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:21:43.137 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.044 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.614 15:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.614 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.874 00:21:46.133 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.133 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.133 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.392 { 00:21:46.392 "cntlid": 111, 00:21:46.392 "qid": 0, 00:21:46.392 "state": "enabled", 00:21:46.392 "thread": "nvmf_tgt_poll_group_000", 00:21:46.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:46.392 "listen_address": { 00:21:46.392 "trtype": "TCP", 00:21:46.392 "adrfam": "IPv4", 00:21:46.392 "traddr": "10.0.0.2", 00:21:46.392 "trsvcid": "4420" 00:21:46.392 }, 00:21:46.392 "peer_address": { 00:21:46.392 "trtype": "TCP", 00:21:46.392 "adrfam": "IPv4", 00:21:46.392 "traddr": "10.0.0.1", 00:21:46.392 "trsvcid": "42894" 00:21:46.392 }, 00:21:46.392 "auth": { 00:21:46.392 "state": "completed", 00:21:46.392 "digest": "sha512", 00:21:46.392 "dhgroup": "ffdhe2048" 00:21:46.392 } 00:21:46.392 } 00:21:46.392 ]' 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.392 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.392 
15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.393 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.393 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.393 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.393 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.393 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.331 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:21:47.331 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.240 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.179 00:21:50.179 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.179 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.179 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.438 { 00:21:50.438 "cntlid": 113, 00:21:50.438 "qid": 0, 00:21:50.438 "state": "enabled", 00:21:50.438 "thread": "nvmf_tgt_poll_group_000", 00:21:50.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:50.438 "listen_address": { 00:21:50.438 "trtype": "TCP", 00:21:50.438 "adrfam": "IPv4", 00:21:50.438 "traddr": "10.0.0.2", 00:21:50.438 "trsvcid": "4420" 00:21:50.438 }, 00:21:50.438 "peer_address": { 00:21:50.438 "trtype": "TCP", 00:21:50.438 "adrfam": "IPv4", 00:21:50.438 "traddr": "10.0.0.1", 00:21:50.438 "trsvcid": "35196" 00:21:50.438 }, 00:21:50.438 "auth": { 00:21:50.438 "state": "completed", 00:21:50.438 "digest": "sha512", 00:21:50.438 "dhgroup": "ffdhe3072" 00:21:50.438 } 00:21:50.438 } 00:21:50.438 ]' 00:21:50.438 15:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.438 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.696 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.696 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.696 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.696 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.696 15:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.262 15:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:51.262 15:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.169 15:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.428 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.997 00:21:53.997 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.997 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.997 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.257 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.257 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.257 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.257 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.257 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.257 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.257 { 00:21:54.257 "cntlid": 115, 00:21:54.257 "qid": 0, 00:21:54.257 "state": "enabled", 00:21:54.257 "thread": "nvmf_tgt_poll_group_000", 00:21:54.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:54.257 "listen_address": { 00:21:54.257 "trtype": "TCP", 00:21:54.257 "adrfam": "IPv4", 00:21:54.257 "traddr": "10.0.0.2", 00:21:54.257 "trsvcid": "4420" 00:21:54.257 }, 00:21:54.257 "peer_address": { 00:21:54.257 "trtype": "TCP", 00:21:54.257 "adrfam": "IPv4", 
00:21:54.257 "traddr": "10.0.0.1", 00:21:54.257 "trsvcid": "35228" 00:21:54.257 }, 00:21:54.257 "auth": { 00:21:54.257 "state": "completed", 00:21:54.257 "digest": "sha512", 00:21:54.257 "dhgroup": "ffdhe3072" 00:21:54.257 } 00:21:54.257 } 00:21:54.257 ]' 00:21:54.257 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.517 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.517 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.517 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.517 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.517 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.517 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.517 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.086 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:55.086 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:21:56.992 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.993 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:56.993 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.993 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.993 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.993 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.993 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.993 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.253 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.192 00:21:58.192 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.192 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.192 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.763 { 00:21:58.763 "cntlid": 117, 00:21:58.763 "qid": 0, 00:21:58.763 "state": "enabled", 00:21:58.763 "thread": "nvmf_tgt_poll_group_000", 00:21:58.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:58.763 "listen_address": { 00:21:58.763 "trtype": "TCP", 
00:21:58.763 "adrfam": "IPv4", 00:21:58.763 "traddr": "10.0.0.2", 00:21:58.763 "trsvcid": "4420" 00:21:58.763 }, 00:21:58.763 "peer_address": { 00:21:58.763 "trtype": "TCP", 00:21:58.763 "adrfam": "IPv4", 00:21:58.763 "traddr": "10.0.0.1", 00:21:58.763 "trsvcid": "44564" 00:21:58.763 }, 00:21:58.763 "auth": { 00:21:58.763 "state": "completed", 00:21:58.763 "digest": "sha512", 00:21:58.763 "dhgroup": "ffdhe3072" 00:21:58.763 } 00:21:58.763 } 00:21:58.763 ]' 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.763 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.023 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.023 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.023 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.593 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:21:59.593 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:22:01.502 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.760 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:01.760 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.760 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.760 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.760 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.760 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.760 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.327 15:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.587 00:22:02.587 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.587 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.587 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.156 { 00:22:03.156 "cntlid": 119, 00:22:03.156 "qid": 0, 00:22:03.156 "state": "enabled", 00:22:03.156 "thread": "nvmf_tgt_poll_group_000", 00:22:03.156 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:03.156 "listen_address": { 00:22:03.156 "trtype": "TCP", 00:22:03.156 "adrfam": "IPv4", 00:22:03.156 "traddr": "10.0.0.2", 00:22:03.156 "trsvcid": "4420" 00:22:03.156 }, 00:22:03.156 "peer_address": { 00:22:03.156 "trtype": "TCP", 00:22:03.156 "adrfam": "IPv4", 00:22:03.156 "traddr": "10.0.0.1", 00:22:03.156 "trsvcid": "44584" 00:22:03.156 }, 00:22:03.156 "auth": { 00:22:03.156 "state": "completed", 00:22:03.156 "digest": "sha512", 00:22:03.156 "dhgroup": "ffdhe3072" 00:22:03.156 } 00:22:03.156 } 00:22:03.156 ]' 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.156 15:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.724 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:22:03.724 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.104 15:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.104 15:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.675 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.617 00:22:06.617 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.617 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.617 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.877 15:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.877 { 00:22:06.877 "cntlid": 121, 00:22:06.877 "qid": 0, 00:22:06.877 "state": "enabled", 00:22:06.877 "thread": "nvmf_tgt_poll_group_000", 00:22:06.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:06.877 "listen_address": { 00:22:06.877 "trtype": "TCP", 00:22:06.877 "adrfam": "IPv4", 00:22:06.877 "traddr": "10.0.0.2", 00:22:06.877 "trsvcid": "4420" 00:22:06.877 }, 00:22:06.877 "peer_address": { 00:22:06.877 "trtype": "TCP", 00:22:06.877 "adrfam": "IPv4", 00:22:06.877 "traddr": "10.0.0.1", 00:22:06.877 "trsvcid": "44598" 00:22:06.877 }, 00:22:06.877 "auth": { 00:22:06.877 "state": "completed", 00:22:06.877 "digest": "sha512", 00:22:06.877 "dhgroup": "ffdhe4096" 00:22:06.877 } 00:22:06.877 } 00:22:06.877 ]' 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.877 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.137 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.137 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.137 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.137 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.137 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.396 15:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:22:07.396 15:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
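[annotation] Besides the SPDK host stack, each round also exercises the kernel NVMe initiator with the same DH-HMAC-CHAP material, as seen immediately above: nvme_connect passes the literal secrets on the command line, the controller is confirmed and disconnected, and the host is de-authorized before the next combination. A minimal sketch of that leg, with the secrets elided (<host-key>/<ctrl-key> stand for the DHHC-1:xx:... strings printed in full in the log):

    # Kernel-initiator leg of each round, condensed from the trace above.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 \
        --dhchap-secret '<host-key>' --dhchap-ctrl-secret '<ctrl-key>'

    # Teardown; the trace expects "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)".
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Target side (rpc_cmd in the trace): remove the host again before the next key/dhgroup round.
    #    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

The trace that follows starts the next round of the loop (sha512 / ffdhe4096 / key1) with the same pattern.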
00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:09.303 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.873 15:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.812 00:22:10.812 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.812 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.812 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.381 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.381 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.382 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.382 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.382 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.382 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.382 { 00:22:11.382 "cntlid": 123, 00:22:11.382 "qid": 0, 00:22:11.382 "state": "enabled", 00:22:11.382 "thread": "nvmf_tgt_poll_group_000", 00:22:11.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:11.382 "listen_address": { 00:22:11.382 "trtype": "TCP", 00:22:11.382 "adrfam": "IPv4", 00:22:11.382 "traddr": "10.0.0.2", 00:22:11.382 "trsvcid": "4420" 00:22:11.382 }, 00:22:11.382 "peer_address": { 00:22:11.382 "trtype": "TCP", 00:22:11.382 "adrfam": "IPv4", 00:22:11.382 "traddr": "10.0.0.1", 00:22:11.382 "trsvcid": "42614" 00:22:11.382 }, 00:22:11.382 "auth": { 00:22:11.382 "state": "completed", 00:22:11.382 "digest": "sha512", 00:22:11.382 "dhgroup": "ffdhe4096" 00:22:11.382 } 00:22:11.382 } 00:22:11.382 ]' 00:22:11.382 15:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.382 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.382 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.382 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.382 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.382 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.382 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.382 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.952 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:22:11.952 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:22:13.862 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.862 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:13.862 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.862 15:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.862 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.862 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.863 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.863 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.433 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.011 00:22:15.011 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.011 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.011 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.581 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.581 15:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.581 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.581 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.581 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.581 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.582 { 00:22:15.582 "cntlid": 125, 00:22:15.582 "qid": 0, 00:22:15.582 "state": "enabled", 00:22:15.582 "thread": "nvmf_tgt_poll_group_000", 00:22:15.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:15.582 "listen_address": { 00:22:15.582 "trtype": "TCP", 00:22:15.582 "adrfam": "IPv4", 00:22:15.582 "traddr": "10.0.0.2", 00:22:15.582 "trsvcid": "4420" 00:22:15.582 }, 00:22:15.582 "peer_address": { 00:22:15.582 "trtype": "TCP", 00:22:15.582 "adrfam": "IPv4", 00:22:15.582 "traddr": "10.0.0.1", 00:22:15.582 "trsvcid": "42646" 00:22:15.582 }, 00:22:15.582 "auth": { 00:22:15.582 "state": "completed", 00:22:15.582 "digest": "sha512", 00:22:15.582 "dhgroup": "ffdhe4096" 00:22:15.582 } 00:22:15.582 } 00:22:15.582 ]' 00:22:15.582 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.582 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.582 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.843 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:15.843 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.843 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.843 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.843 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.158 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:22:16.158 15:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.141 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.400 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.970 00:22:18.970 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.970 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.970 15:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.231 15:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.231 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.231 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.231 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.231 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.231 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.231 { 00:22:19.231 "cntlid": 127, 00:22:19.231 "qid": 0, 00:22:19.231 "state": "enabled", 00:22:19.231 "thread": "nvmf_tgt_poll_group_000", 00:22:19.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:19.231 "listen_address": { 00:22:19.231 "trtype": "TCP", 00:22:19.231 "adrfam": "IPv4", 00:22:19.231 "traddr": "10.0.0.2", 00:22:19.231 "trsvcid": "4420" 00:22:19.231 }, 00:22:19.231 "peer_address": { 00:22:19.231 "trtype": "TCP", 00:22:19.231 "adrfam": "IPv4", 00:22:19.231 "traddr": "10.0.0.1", 00:22:19.231 "trsvcid": "50464" 00:22:19.231 }, 00:22:19.231 "auth": { 00:22:19.231 "state": "completed", 00:22:19.231 "digest": "sha512", 00:22:19.231 "dhgroup": "ffdhe4096" 00:22:19.231 } 00:22:19.231 } 00:22:19.231 ]' 00:22:19.231 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.491 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.491 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.491 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:19.491 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.491 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.491 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.491 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.057 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:22:20.057 15:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:21.965 15:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.536 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.472 00:22:23.472 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.472 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.472 
15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.732 { 00:22:23.732 "cntlid": 129, 00:22:23.732 "qid": 0, 00:22:23.732 "state": "enabled", 00:22:23.732 "thread": "nvmf_tgt_poll_group_000", 00:22:23.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:23.732 "listen_address": { 00:22:23.732 "trtype": "TCP", 00:22:23.732 "adrfam": "IPv4", 00:22:23.732 "traddr": "10.0.0.2", 00:22:23.732 "trsvcid": "4420" 00:22:23.732 }, 00:22:23.732 "peer_address": { 00:22:23.732 "trtype": "TCP", 00:22:23.732 "adrfam": "IPv4", 00:22:23.732 "traddr": "10.0.0.1", 00:22:23.732 "trsvcid": "50496" 00:22:23.732 }, 00:22:23.732 "auth": { 00:22:23.732 "state": "completed", 00:22:23.732 "digest": "sha512", 00:22:23.732 "dhgroup": "ffdhe6144" 00:22:23.732 } 00:22:23.732 } 00:22:23.732 ]' 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.732 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.993 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.993 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.993 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.567 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:22:24.567 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret 
DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.945 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.515 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.456 00:22:27.716 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.716 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.716 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.975 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.975 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.976 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.976 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.976 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.976 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.976 { 00:22:27.976 "cntlid": 131, 00:22:27.976 "qid": 0, 00:22:27.976 "state": "enabled", 00:22:27.976 "thread": "nvmf_tgt_poll_group_000", 00:22:27.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:27.976 "listen_address": { 00:22:27.976 "trtype": "TCP", 00:22:27.976 "adrfam": "IPv4", 00:22:27.976 "traddr": "10.0.0.2", 00:22:27.976 "trsvcid": "4420" 00:22:27.976 }, 00:22:27.976 "peer_address": { 00:22:27.976 "trtype": "TCP", 00:22:27.976 "adrfam": "IPv4", 00:22:27.976 "traddr": "10.0.0.1", 00:22:27.976 "trsvcid": "50534" 00:22:27.976 }, 00:22:27.976 "auth": { 00:22:27.976 "state": "completed", 00:22:27.976 "digest": "sha512", 00:22:27.976 "dhgroup": "ffdhe6144" 00:22:27.976 } 00:22:27.976 } 00:22:27.976 ]' 00:22:27.976 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.235 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.235 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.235 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:28.235 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.235 15:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.235 15:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.235 15:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.804 15:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:22:28.804 15:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.716 15:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.287 15:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.227 00:22:32.227 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.227 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.227 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.798 { 00:22:32.798 "cntlid": 133, 00:22:32.798 "qid": 0, 00:22:32.798 "state": "enabled", 00:22:32.798 "thread": "nvmf_tgt_poll_group_000", 00:22:32.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:32.798 "listen_address": { 00:22:32.798 "trtype": "TCP", 00:22:32.798 "adrfam": "IPv4", 00:22:32.798 "traddr": "10.0.0.2", 00:22:32.798 "trsvcid": "4420" 00:22:32.798 }, 00:22:32.798 "peer_address": { 00:22:32.798 "trtype": "TCP", 00:22:32.798 "adrfam": "IPv4", 00:22:32.798 "traddr": "10.0.0.1", 00:22:32.798 "trsvcid": "56598" 00:22:32.798 }, 00:22:32.798 "auth": { 00:22:32.798 "state": "completed", 00:22:32.798 "digest": "sha512", 00:22:32.798 "dhgroup": "ffdhe6144" 00:22:32.798 } 00:22:32.798 } 00:22:32.798 ]' 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.798 15:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.365 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret 
DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:22:33.365 15:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.276 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:35.848 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.790 00:22:36.790 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.790 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.790 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.050 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.050 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.050 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.050 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.310 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.310 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.310 { 00:22:37.310 "cntlid": 135, 00:22:37.310 "qid": 0, 00:22:37.310 "state": "enabled", 00:22:37.310 "thread": "nvmf_tgt_poll_group_000", 00:22:37.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:37.310 "listen_address": { 00:22:37.310 "trtype": "TCP", 00:22:37.310 "adrfam": "IPv4", 00:22:37.310 "traddr": "10.0.0.2", 00:22:37.310 "trsvcid": "4420" 00:22:37.310 }, 00:22:37.310 "peer_address": { 00:22:37.310 "trtype": "TCP", 00:22:37.310 "adrfam": "IPv4", 00:22:37.310 "traddr": "10.0.0.1", 00:22:37.310 "trsvcid": "56634" 00:22:37.310 }, 00:22:37.310 "auth": { 00:22:37.310 "state": "completed", 00:22:37.310 "digest": "sha512", 00:22:37.310 "dhgroup": "ffdhe6144" 00:22:37.310 } 00:22:37.310 } 00:22:37.310 ]' 00:22:37.310 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.310 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.310 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.310 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:37.310 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.310 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.310 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.310 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.880 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:22:37.880 15:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:40.424 15:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.424 15:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.333 00:22:42.333 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.333 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.333 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.907 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.907 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.907 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.907 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.907 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.907 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.907 { 00:22:42.907 "cntlid": 137, 00:22:42.907 "qid": 0, 00:22:42.907 "state": "enabled", 00:22:42.907 "thread": "nvmf_tgt_poll_group_000", 00:22:42.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:42.907 "listen_address": { 00:22:42.907 "trtype": "TCP", 00:22:42.907 "adrfam": "IPv4", 00:22:42.907 "traddr": "10.0.0.2", 00:22:42.907 "trsvcid": "4420" 00:22:42.907 }, 00:22:42.907 "peer_address": { 00:22:42.907 "trtype": "TCP", 00:22:42.907 "adrfam": "IPv4", 00:22:42.907 "traddr": "10.0.0.1", 00:22:42.907 "trsvcid": "54552" 00:22:42.908 }, 00:22:42.908 "auth": { 00:22:42.908 "state": "completed", 00:22:42.908 "digest": "sha512", 00:22:42.908 "dhgroup": "ffdhe8192" 00:22:42.908 } 00:22:42.908 } 00:22:42.908 ]' 00:22:42.908 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.908 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.908 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.169 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:43.169 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.169 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.169 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.169 15:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.736 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:22:43.736 15:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.691 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.262 15:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.262 15:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.171 00:22:48.171 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.171 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.171 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.110 { 00:22:49.110 "cntlid": 139, 00:22:49.110 "qid": 0, 00:22:49.110 "state": "enabled", 00:22:49.110 "thread": "nvmf_tgt_poll_group_000", 00:22:49.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:49.110 "listen_address": { 00:22:49.110 "trtype": "TCP", 00:22:49.110 "adrfam": "IPv4", 00:22:49.110 "traddr": "10.0.0.2", 00:22:49.110 "trsvcid": "4420" 00:22:49.110 }, 00:22:49.110 "peer_address": { 00:22:49.110 "trtype": "TCP", 00:22:49.110 "adrfam": "IPv4", 00:22:49.110 "traddr": "10.0.0.1", 00:22:49.110 "trsvcid": "54578" 00:22:49.110 }, 00:22:49.110 "auth": { 00:22:49.110 "state": "completed", 00:22:49.110 "digest": "sha512", 00:22:49.110 "dhgroup": "ffdhe8192" 00:22:49.110 } 00:22:49.110 } 00:22:49.110 ]' 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.110 15:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.110 15:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.679 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:22:49.680 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: --dhchap-ctrl-secret DHHC-1:02:YjkxMjA5MDQyNDYyZDNhZDNmOThlOTViYTdlYmQwYzc2OTM1MTE1NTZiMjI0YjBl1cI11Q==: 00:22:51.059 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.320 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:51.320 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.320 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.320 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.320 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.320 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:51.320 15:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.890 15:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.890 15:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.798 00:22:53.798 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.798 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.798 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.058 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.058 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.058 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.058 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.058 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.058 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.058 { 00:22:54.058 "cntlid": 141, 00:22:54.058 "qid": 0, 00:22:54.058 "state": "enabled", 00:22:54.058 "thread": "nvmf_tgt_poll_group_000", 00:22:54.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:54.058 "listen_address": { 00:22:54.058 "trtype": "TCP", 00:22:54.058 "adrfam": "IPv4", 00:22:54.058 "traddr": "10.0.0.2", 00:22:54.058 "trsvcid": "4420" 00:22:54.058 }, 00:22:54.058 "peer_address": { 00:22:54.058 "trtype": "TCP", 00:22:54.058 "adrfam": "IPv4", 00:22:54.058 "traddr": "10.0.0.1", 00:22:54.058 "trsvcid": "48444" 00:22:54.058 }, 00:22:54.058 "auth": { 00:22:54.058 "state": "completed", 00:22:54.058 "digest": "sha512", 00:22:54.058 "dhgroup": "ffdhe8192" 00:22:54.058 } 00:22:54.058 } 00:22:54.058 ]' 00:22:54.058 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.317 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.317 15:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.317 15:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:54.317 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.317 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.317 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.317 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.888 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:22:54.888 15:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:01:Y2ZkMTJmZGFjNWQzZTM4OTk2NGE2ZDJkM2MxODZhNjKwn3wh: 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.798 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.058 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:57.058 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.059 15:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:57.059 15:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.443 00:22:58.443 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.443 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.443 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.014 { 00:22:59.014 "cntlid": 143, 00:22:59.014 "qid": 0, 00:22:59.014 "state": "enabled", 00:22:59.014 "thread": "nvmf_tgt_poll_group_000", 00:22:59.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:59.014 "listen_address": { 00:22:59.014 "trtype": "TCP", 00:22:59.014 "adrfam": "IPv4", 00:22:59.014 "traddr": "10.0.0.2", 00:22:59.014 "trsvcid": "4420" 00:22:59.014 }, 00:22:59.014 "peer_address": { 00:22:59.014 "trtype": "TCP", 00:22:59.014 "adrfam": "IPv4", 00:22:59.014 "traddr": "10.0.0.1", 00:22:59.014 "trsvcid": "48482" 00:22:59.014 }, 00:22:59.014 "auth": { 00:22:59.014 "state": "completed", 00:22:59.014 "digest": "sha512", 00:22:59.014 "dhgroup": "ffdhe8192" 00:22:59.014 } 00:22:59.014 } 00:22:59.014 ]' 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.014 
15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.014 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.585 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:22:59.585 15:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:00.964 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:01.534 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:01.534 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.534 15:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.535 15:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.475 00:23:02.475 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.475 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.475 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.734 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.994 { 00:23:02.994 "cntlid": 145, 00:23:02.994 "qid": 0, 00:23:02.994 "state": "enabled", 00:23:02.994 "thread": "nvmf_tgt_poll_group_000", 00:23:02.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:02.994 "listen_address": { 00:23:02.994 "trtype": "TCP", 00:23:02.994 "adrfam": "IPv4", 00:23:02.994 "traddr": "10.0.0.2", 00:23:02.994 "trsvcid": "4420" 00:23:02.994 }, 00:23:02.994 "peer_address": { 00:23:02.994 
"trtype": "TCP", 00:23:02.994 "adrfam": "IPv4", 00:23:02.994 "traddr": "10.0.0.1", 00:23:02.994 "trsvcid": "46862" 00:23:02.994 }, 00:23:02.994 "auth": { 00:23:02.994 "state": "completed", 00:23:02.994 "digest": "sha512", 00:23:02.994 "dhgroup": "ffdhe8192" 00:23:02.994 } 00:23:02.994 } 00:23:02.994 ]' 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:02.994 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.995 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.995 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.995 15:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.929 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:23:03.929 15:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OWRiMTFjN2NhMTU3YmMyZmY0YjVhZDhiZDEyMDdlOWQ1NDg2YjNkYWJiMGM1M2U2a5gHBw==: --dhchap-ctrl-secret DHHC-1:03:MTMxM2I5ZDVlMTYwYzk0NTZjNzFkODljYTZjOTZiN2E4OWI1ZjE2MmYyNWVmMzExZmU3MjcyOTc4Y2ZmYmM3ZAxfYBY=: 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.307 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:05.308 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:05.308 15:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:07.218 request: 00:23:07.218 { 00:23:07.218 "name": "nvme0", 00:23:07.218 "trtype": "tcp", 00:23:07.218 "traddr": "10.0.0.2", 00:23:07.218 "adrfam": "ipv4", 00:23:07.219 "trsvcid": "4420", 00:23:07.219 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:07.219 "prchk_reftag": false, 00:23:07.219 "prchk_guard": false, 00:23:07.219 "hdgst": false, 00:23:07.219 "ddgst": false, 00:23:07.219 "dhchap_key": "key2", 00:23:07.219 "allow_unrecognized_csi": false, 00:23:07.219 "method": "bdev_nvme_attach_controller", 00:23:07.219 "req_id": 1 00:23:07.219 } 00:23:07.219 Got JSON-RPC error response 00:23:07.219 response: 00:23:07.219 { 00:23:07.219 "code": -5, 00:23:07.219 "message": "Input/output error" 00:23:07.219 } 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.219 15:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.219 15:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.127 request: 00:23:09.127 { 00:23:09.127 "name": "nvme0", 00:23:09.127 "trtype": "tcp", 00:23:09.127 "traddr": "10.0.0.2", 00:23:09.127 "adrfam": "ipv4", 00:23:09.127 "trsvcid": "4420", 00:23:09.127 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:09.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:09.127 "prchk_reftag": false, 00:23:09.127 "prchk_guard": false, 00:23:09.127 "hdgst": false, 00:23:09.127 "ddgst": false, 00:23:09.127 "dhchap_key": "key1", 00:23:09.127 "dhchap_ctrlr_key": "ckey2", 00:23:09.127 "allow_unrecognized_csi": false, 00:23:09.127 "method": "bdev_nvme_attach_controller", 00:23:09.127 "req_id": 1 00:23:09.127 } 00:23:09.127 Got JSON-RPC error response 00:23:09.127 response: 00:23:09.127 { 00:23:09.127 "code": -5, 00:23:09.127 "message": "Input/output error" 00:23:09.127 } 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:09.127 15:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.127 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.506 request: 00:23:10.506 { 00:23:10.506 "name": "nvme0", 00:23:10.506 "trtype": "tcp", 00:23:10.506 "traddr": "10.0.0.2", 00:23:10.506 "adrfam": "ipv4", 00:23:10.506 "trsvcid": "4420", 00:23:10.506 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:10.506 "prchk_reftag": false, 00:23:10.506 "prchk_guard": false, 00:23:10.506 "hdgst": false, 00:23:10.506 "ddgst": false, 00:23:10.506 "dhchap_key": "key1", 00:23:10.506 "dhchap_ctrlr_key": "ckey1", 00:23:10.506 "allow_unrecognized_csi": false, 00:23:10.506 "method": "bdev_nvme_attach_controller", 00:23:10.506 "req_id": 1 00:23:10.506 } 00:23:10.506 Got JSON-RPC error response 00:23:10.506 response: 00:23:10.506 { 00:23:10.506 "code": -5, 00:23:10.506 "message": "Input/output error" 00:23:10.506 } 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3165587 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3165587 ']' 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3165587 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3165587 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3165587' 00:23:10.506 killing process with pid 3165587 00:23:10.506 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3165587 00:23:10.507 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3165587 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3203278 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3203278 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3203278 ']' 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.767 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3203278 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3203278 ']' 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
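[editor's note] For readers following the trace: the target side at this point is just nvmf_tgt launched with --wait-for-rpc and the nvmf_auth log flag, after which the script blocks until the JSON-RPC socket is up. A minimal sketch of that pattern follows; it drops the ip netns wrapper seen in this run, and the rpc_get_methods polling loop and the framework_start_init step are assumptions based on standard SPDK usage, not lines taken from this log.

  # Hedged sketch: start an SPDK target that defers init, wait for its RPC socket.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as seen in this log
  RPC_SOCK=/var/tmp/spdk.sock                                  # default socket echoed above

  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  tgt_pid=$!   # kept so the caller can kill/wait on the target later

  # Poll until the UNIX-domain RPC socket answers (assumed stand-in for the
  # autotest waitforlisten helper).
  until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

  # With --wait-for-rpc the app stays in pre-init state until told to continue
  # (not shown in this trace; assumed to happen before the auth RPCs below).
  "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" framework_start_init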
00:23:11.706 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.707 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 null0 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NGG 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.nNK ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nNK 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QHV 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.HuT ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HuT 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:12.643 15:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lJT 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ONt ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONt 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aV8 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
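[editor's note] The connect_authenticate sha512/ffdhe8192 step being traced here comes down to three RPCs: register the key file with the target keyring, authorize the host NQN on the subsystem with that key, and attach from the host side presenting the same key. A condensed sketch follows, reusing the NQNs, key file, and host RPC socket visible in this run; in the actual test the target-side calls go through the rpc_cmd wrapper inside a network namespace, and the host application is assumed to already have key3 in its own keyring, which this sketch glosses over.

  # Hedged sketch of the key3 attach path.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # 1) Target: expose the DH-HMAC-CHAP secret through the keyring
  #    (key file path as created earlier in this run).
  $RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.aV8

  # 2) Target: allow the host NQN on the subsystem and bind it to that key.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

  # 3) Host: attach a controller over TCP, presenting the matching key
  #    (the host app's RPC socket is /var/tmp/host.sock in this log).
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3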
00:23:12.643 15:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.182 nvme0n1 00:23:15.182 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.182 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.182 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.749 { 00:23:15.749 "cntlid": 1, 00:23:15.749 "qid": 0, 00:23:15.749 "state": "enabled", 00:23:15.749 "thread": "nvmf_tgt_poll_group_000", 00:23:15.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:15.749 "listen_address": { 00:23:15.749 "trtype": "TCP", 00:23:15.749 "adrfam": "IPv4", 00:23:15.749 "traddr": "10.0.0.2", 00:23:15.749 "trsvcid": "4420" 00:23:15.749 }, 00:23:15.749 "peer_address": { 00:23:15.749 "trtype": "TCP", 00:23:15.749 "adrfam": "IPv4", 00:23:15.749 "traddr": "10.0.0.1", 00:23:15.749 "trsvcid": "40752" 00:23:15.749 }, 00:23:15.749 "auth": { 00:23:15.749 "state": "completed", 00:23:15.749 "digest": "sha512", 00:23:15.749 "dhgroup": "ffdhe8192" 00:23:15.749 } 00:23:15.749 } 00:23:15.749 ]' 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:15.749 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.006 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.007 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.007 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.266 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:23:16.266 15:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:18.171 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:18.742 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.310 request: 00:23:19.310 { 00:23:19.310 "name": "nvme0", 00:23:19.310 "trtype": "tcp", 00:23:19.310 "traddr": "10.0.0.2", 00:23:19.310 "adrfam": "ipv4", 00:23:19.310 "trsvcid": "4420", 00:23:19.310 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:19.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:19.310 "prchk_reftag": false, 00:23:19.310 "prchk_guard": false, 00:23:19.310 "hdgst": false, 00:23:19.310 "ddgst": false, 00:23:19.310 "dhchap_key": "key3", 00:23:19.310 "allow_unrecognized_csi": false, 00:23:19.310 "method": "bdev_nvme_attach_controller", 00:23:19.310 "req_id": 1 00:23:19.310 } 00:23:19.310 Got JSON-RPC error response 00:23:19.310 response: 00:23:19.310 { 00:23:19.310 "code": -5, 00:23:19.310 "message": "Input/output error" 00:23:19.310 } 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:19.310 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.883 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.452 request: 00:23:20.452 { 00:23:20.452 "name": "nvme0", 00:23:20.452 "trtype": "tcp", 00:23:20.452 "traddr": "10.0.0.2", 00:23:20.452 "adrfam": "ipv4", 00:23:20.452 "trsvcid": "4420", 00:23:20.452 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:20.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:20.452 "prchk_reftag": false, 00:23:20.452 "prchk_guard": false, 00:23:20.452 "hdgst": false, 00:23:20.452 "ddgst": false, 00:23:20.452 "dhchap_key": "key3", 00:23:20.452 "allow_unrecognized_csi": false, 00:23:20.452 "method": "bdev_nvme_attach_controller", 00:23:20.452 "req_id": 1 00:23:20.452 } 00:23:20.452 Got JSON-RPC error response 00:23:20.452 response: 00:23:20.452 { 00:23:20.452 "code": -5, 00:23:20.452 "message": "Input/output error" 00:23:20.452 } 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:20.452 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.022 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:22.401 request: 00:23:22.401 { 00:23:22.401 "name": "nvme0", 00:23:22.401 "trtype": "tcp", 00:23:22.401 "traddr": "10.0.0.2", 00:23:22.401 "adrfam": "ipv4", 00:23:22.401 "trsvcid": "4420", 00:23:22.401 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:22.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:22.401 "prchk_reftag": false, 00:23:22.401 "prchk_guard": false, 00:23:22.401 "hdgst": false, 00:23:22.401 "ddgst": false, 00:23:22.401 "dhchap_key": "key0", 00:23:22.401 "dhchap_ctrlr_key": "key1", 00:23:22.401 "allow_unrecognized_csi": false, 00:23:22.402 "method": "bdev_nvme_attach_controller", 00:23:22.402 "req_id": 1 00:23:22.402 } 00:23:22.402 Got JSON-RPC error response 00:23:22.402 response: 00:23:22.402 { 00:23:22.402 "code": -5, 00:23:22.402 "message": "Input/output error" 00:23:22.402 } 00:23:22.402 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:22.402 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.402 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.402 15:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.402 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:22.402 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:22.402 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:22.661 nvme0n1 00:23:22.661 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:22.661 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:22.661 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.242 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.242 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.242 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.861 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:23:23.861 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.861 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.861 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.861 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:23.861 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:23.861 15:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:27.159 nvme0n1 00:23:27.159 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:27.159 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:27.159 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:27.729 15:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.668 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.668 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:23:28.668 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: --dhchap-ctrl-secret DHHC-1:03:ZTJjMGY4NGQ2MDViMGU1Yjk0MTU3NzZkNjM5ZTRhYTg5YTRlNzAyMjAwMjQ0OTQ5MTFiMjJjNzdhOGNkMWMwNIXYGkw=: 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.578 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:31.145 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:33.047 request: 00:23:33.047 { 00:23:33.047 "name": "nvme0", 00:23:33.047 "trtype": "tcp", 00:23:33.047 "traddr": "10.0.0.2", 00:23:33.047 "adrfam": "ipv4", 00:23:33.047 "trsvcid": "4420", 00:23:33.047 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:33.047 "prchk_reftag": false, 00:23:33.047 "prchk_guard": false, 00:23:33.047 "hdgst": false, 00:23:33.047 "ddgst": false, 00:23:33.047 "dhchap_key": "key1", 00:23:33.047 "allow_unrecognized_csi": false, 00:23:33.047 "method": "bdev_nvme_attach_controller", 00:23:33.047 "req_id": 1 00:23:33.047 } 00:23:33.047 Got JSON-RPC error response 00:23:33.047 response: 00:23:33.047 { 00:23:33.047 "code": -5, 00:23:33.047 "message": "Input/output error" 00:23:33.047 } 00:23:33.047 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:33.047 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.047 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.047 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.047 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:33.047 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:33.047 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:36.334 nvme0n1 00:23:36.334 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:36.334 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:36.334 15:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.593 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.593 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.593 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.853 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:36.853 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.853 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.853 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.853 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:36.853 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:36.853 15:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:37.422 nvme0n1 00:23:37.422 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:37.422 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.422 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:37.992 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.992 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.992 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: '' 2s 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: ]] 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDk5MGM1NTE5NWI1NWRhZjYwMTAxMzk3YzU0NTc3YjCVl+Uj: 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:38.254 15:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:40.162 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:40.162 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:40.162 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:40.162 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:40.162 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:40.162 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:40.162 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:40.162 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:40.162 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.162 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: 2s 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: ]] 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTFhMjg1N2I1NTU2YmYyMmFjNjk4NDVlMWIzNmVlMDkzMTQ5MTEwNTQ4MTI4Yzc1ADLPDw==: 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:40.420 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:42.328 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:45.617 nvme0n1 00:23:45.617 15:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:45.617 15:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.617 15:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.617 15:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.617 15:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:45.617 15:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:46.994 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:46.994 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:46.994 15:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.563 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.563 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:47.563 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.563 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.563 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.563 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:47.563 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:47.823 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:47.823 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:47.823 15:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.392 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.392 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:48.392 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.392 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.392 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:48.393 15:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:50.932 request: 00:23:50.932 { 00:23:50.932 "name": "nvme0", 00:23:50.932 "dhchap_key": "key1", 00:23:50.932 "dhchap_ctrlr_key": "key3", 00:23:50.932 "method": "bdev_nvme_set_keys", 00:23:50.932 "req_id": 1 00:23:50.932 } 00:23:50.932 Got JSON-RPC error response 00:23:50.932 response: 00:23:50.932 { 00:23:50.932 "code": -13, 00:23:50.932 "message": "Permission denied" 00:23:50.932 } 00:23:50.932 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:50.932 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.932 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.932 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.932 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:50.932 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:50.932 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.192 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:51.192 15:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:52.133 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:52.133 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:52.133 15:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.701 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:52.701 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:52.701 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.701 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.967 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:52.967 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:52.967 15:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:56.265 nvme0n1 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
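For reference, the DH-HMAC-CHAP re-keying exercised in this part of auth.sh reduces to a short RPC sequence: allow the new key pair on the target subsystem first, rotate the keys on the already-attached host controller to the matching pair, and expect bdev_nvme_set_keys to fail with JSON-RPC error -13 ("Permission denied") when the host asks for a pair the subsystem does not allow, which is exactly what the NOT wrapper asserts above. Below is a minimal sketch using only rpc.py commands, NQNs and key names visible in the trace; the target socket path (/var/tmp/spdk.sock, rpc_cmd's usual default), the shell variables, and the explicit error check are illustrative assumptions, and key0..key3 are key names assumed to have been registered with the keyring earlier in the test.

# Illustrative sketch of the key-rotation flow traced above (commands and NQNs from the log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
TGT_SOCK=/var/tmp/spdk.sock    # assumed: rpc_cmd's default application socket
HOST_SOCK=/var/tmp/host.sock   # host-side socket used by hostrpc in the trace
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

# 1. Allow the host to use the new key pair on the target subsystem.
"$RPC" -s "$TGT_SOCK" nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 2. Rotate the keys on the existing host controller to the matching pair.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# 3. A pair the subsystem does not allow must be rejected with -13 "Permission denied".
if "$RPC" -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "expected Permission denied, got success" >&2
    exit 1
fi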
00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:56.265 15:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:57.646 request: 00:23:57.646 { 00:23:57.646 "name": "nvme0", 00:23:57.646 "dhchap_key": "key2", 00:23:57.646 "dhchap_ctrlr_key": "key0", 00:23:57.646 "method": "bdev_nvme_set_keys", 00:23:57.646 "req_id": 1 00:23:57.646 } 00:23:57.646 Got JSON-RPC error response 00:23:57.646 response: 00:23:57.646 { 00:23:57.646 "code": -13, 00:23:57.646 "message": "Permission denied" 00:23:57.646 } 00:23:57.646 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:57.646 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:57.646 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:57.646 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:57.646 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:57.646 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.646 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:58.216 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:58.216 15:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:59.156 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:59.156 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:59.156 15:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3165707 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3165707 ']' 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3165707 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:59.725 
15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:59.725 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3165707 00:23:59.985 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:59.985 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:59.985 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3165707' 00:23:59.985 killing process with pid 3165707 00:23:59.985 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3165707 00:23:59.985 15:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3165707 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:00.555 rmmod nvme_tcp 00:24:00.555 rmmod nvme_fabrics 00:24:00.555 rmmod nvme_keyring 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3203278 ']' 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3203278 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3203278 ']' 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3203278 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3203278 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3203278' 00:24:00.555 killing process with pid 3203278 00:24:00.555 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3203278 00:24:00.555 15:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3203278 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.127 15:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.NGG /tmp/spdk.key-sha256.QHV /tmp/spdk.key-sha384.lJT /tmp/spdk.key-sha512.aV8 /tmp/spdk.key-sha512.nNK /tmp/spdk.key-sha384.HuT /tmp/spdk.key-sha256.ONt '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:03.062 00:24:03.062 real 6m16.362s 00:24:03.062 user 14m43.661s 00:24:03.062 sys 0m42.058s 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 ************************************ 00:24:03.062 END TEST nvmf_auth_target 00:24:03.062 ************************************ 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 ************************************ 00:24:03.062 START TEST nvmf_bdevio_no_huge 00:24:03.062 ************************************ 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:03.062 * Looking for test storage... 
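Before the bdevio suite above proceeds, note that the cleanup just traced (nvmftestfini plus the auth.sh cleanup trap) follows a fixed pattern: stop the target process, unload the NVMe/TCP kernel modules, drop only the SPDK-tagged iptables rules, flush the test interface, and remove the generated DH-HMAC-CHAP key files. The condensed sketch below is built from commands visible in the trace; the $nvmfpid variable, the simplified modprobe loop, and the key-file glob are illustrative assumptions rather than the suite's exact helpers.

# Condensed teardown, mirroring nvmftestfini/cleanup in the trace above.
kill "$nvmfpid" && wait "$nvmfpid"      # killprocess: stop the nvmf_tgt started earlier
sync
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
    modprobe -v -r "$mod" || true       # the trace runs this under set +e, tolerating busy modules
done

# Remove only the firewall rules the test added (they carry an SPDK_NVMF comment tag).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Release the test addressing and the per-run DH-HMAC-CHAP key material.
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk.key-*                   # actual names are random per run (e.g. spdk.key-sha256.QHV)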
00:24:03.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # lcov --version 00:24:03.062 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.321 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:03.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.322 --rc genhtml_branch_coverage=1 00:24:03.322 --rc genhtml_function_coverage=1 00:24:03.322 --rc genhtml_legend=1 00:24:03.322 --rc geninfo_all_blocks=1 00:24:03.322 --rc geninfo_unexecuted_blocks=1 00:24:03.322 00:24:03.322 ' 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:03.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.322 --rc genhtml_branch_coverage=1 00:24:03.322 --rc genhtml_function_coverage=1 00:24:03.322 --rc genhtml_legend=1 00:24:03.322 --rc geninfo_all_blocks=1 00:24:03.322 --rc geninfo_unexecuted_blocks=1 00:24:03.322 00:24:03.322 ' 00:24:03.322 15:19:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:03.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.322 --rc genhtml_branch_coverage=1 00:24:03.322 --rc genhtml_function_coverage=1 00:24:03.322 --rc genhtml_legend=1 00:24:03.322 --rc geninfo_all_blocks=1 00:24:03.322 --rc geninfo_unexecuted_blocks=1 00:24:03.322 00:24:03.322 ' 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:03.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.322 --rc genhtml_branch_coverage=1 00:24:03.322 --rc genhtml_function_coverage=1 00:24:03.322 --rc genhtml_legend=1 00:24:03.322 --rc geninfo_all_blocks=1 00:24:03.322 --rc geninfo_unexecuted_blocks=1 00:24:03.322 00:24:03.322 ' 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:03.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:03.322 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.323 15:19:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:24:06.610 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.610 
15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:06.611 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:06.611 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:06.611 Found net devices under 0000:84:00.0: cvl_0_0 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:06.611 Found net devices under 0000:84:00.1: cvl_0_1 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.611 15:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:24:06.611 00:24:06.611 --- 10.0.0.2 ping statistics --- 00:24:06.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.611 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:24:06.611 00:24:06.611 --- 10.0.0.1 ping statistics --- 00:24:06.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.611 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.611 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3210909 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3210909 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3210909 ']' 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.612 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.612 [2024-10-28 15:19:53.196022] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:24:06.612 [2024-10-28 15:19:53.196111] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:06.612 [2024-10-28 15:19:53.290379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.612 [2024-10-28 15:19:53.359328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.612 [2024-10-28 15:19:53.359398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.612 [2024-10-28 15:19:53.359415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.612 [2024-10-28 15:19:53.359429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.612 [2024-10-28 15:19:53.359442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
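The --no-huge variant of the target shown above is started inside the test namespace with legacy memory instead of hugepages; the flags below are the ones visible in the EAL parameter line. A minimal sketch of launching it and waiting for its RPC socket follows; the rpc_get_methods poll is an illustrative stand-in for the suite's waitforlisten helper, and /var/tmp/spdk.sock is assumed to be the application's default socket.

# Launch nvmf_tgt without hugepages inside the target namespace (flags from the trace).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Stand-in for waitforlisten: poll until the app answers JSON-RPC on its socket.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"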
00:24:06.612 [2024-10-28 15:19:53.360687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:06.612 [2024-10-28 15:19:53.360743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:06.612 [2024-10-28 15:19:53.360796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:06.612 [2024-10-28 15:19:53.360799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 [2024-10-28 15:19:53.621272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.870 Malloc0 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.870 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.871 [2024-10-28 15:19:53.661635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.871 { 00:24:06.871 "params": { 00:24:06.871 "name": "Nvme$subsystem", 00:24:06.871 "trtype": "$TEST_TRANSPORT", 00:24:06.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.871 "adrfam": "ipv4", 00:24:06.871 "trsvcid": "$NVMF_PORT", 00:24:06.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.871 "hdgst": ${hdgst:-false}, 00:24:06.871 "ddgst": ${ddgst:-false} 00:24:06.871 }, 00:24:06.871 "method": "bdev_nvme_attach_controller" 00:24:06.871 } 00:24:06.871 EOF 00:24:06.871 )") 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:24:06.871 15:19:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:06.871 "params": { 00:24:06.871 "name": "Nvme1", 00:24:06.871 "trtype": "tcp", 00:24:06.871 "traddr": "10.0.0.2", 00:24:06.871 "adrfam": "ipv4", 00:24:06.871 "trsvcid": "4420", 00:24:06.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.871 "hdgst": false, 00:24:06.871 "ddgst": false 00:24:06.871 }, 00:24:06.871 "method": "bdev_nvme_attach_controller" 00:24:06.871 }' 00:24:06.871 [2024-10-28 15:19:53.714217] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
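Stripped of the xtrace noise, the target-side setup for this bdevio run is one transport plus four RPCs, and the initiator side is bdevio itself fed a small JSON config equivalent to the one printed above. In the sketch below the RPC arguments are taken verbatim from the trace; the /var/tmp/spdk.sock socket path, the temporary /tmp/bdevio.json file, and the outer "subsystems"/"config" wrapper of the JSON (the standard SPDK config layout) are assumptions, with only the bdev_nvme_attach_controller params copied from the log.

# Target-side setup for the bdevio run (arguments verbatim from the trace above).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512-byte blocks, as bdevio reports later
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevio consumes a config equivalent to the attach-controller JSON printed above.
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/bdevio.json --no-huge -s 1024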
00:24:06.871 [2024-10-28 15:19:53.714304] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3210941 ] 00:24:07.129 [2024-10-28 15:19:53.798939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:07.129 [2024-10-28 15:19:53.865516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.129 [2024-10-28 15:19:53.865568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.129 [2024-10-28 15:19:53.865571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.388 I/O targets: 00:24:07.388 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:07.388 00:24:07.388 00:24:07.388 CUnit - A unit testing framework for C - Version 2.1-3 00:24:07.388 http://cunit.sourceforge.net/ 00:24:07.388 00:24:07.388 00:24:07.388 Suite: bdevio tests on: Nvme1n1 00:24:07.388 Test: blockdev write read block ...passed 00:24:07.388 Test: blockdev write zeroes read block ...passed 00:24:07.388 Test: blockdev write zeroes read no split ...passed 00:24:07.388 Test: blockdev write zeroes read split ...passed 00:24:07.388 Test: blockdev write zeroes read split partial ...passed 00:24:07.388 Test: blockdev reset ...[2024-10-28 15:19:54.220311] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:07.388 [2024-10-28 15:19:54.220428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127fcc0 (9): Bad file descriptor 00:24:07.388 [2024-10-28 15:19:54.234502] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:24:07.388 passed 00:24:07.388 Test: blockdev write read 8 blocks ...passed 00:24:07.388 Test: blockdev write read size > 128k ...passed 00:24:07.388 Test: blockdev write read invalid size ...passed 00:24:07.645 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:07.645 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:07.645 Test: blockdev write read max offset ...passed 00:24:07.645 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:07.645 Test: blockdev writev readv 8 blocks ...passed 00:24:07.645 Test: blockdev writev readv 30 x 1block ...passed 00:24:07.645 Test: blockdev writev readv block ...passed 00:24:07.645 Test: blockdev writev readv size > 128k ...passed 00:24:07.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:07.903 Test: blockdev comparev and writev ...[2024-10-28 15:19:54.532151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.532191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.532216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.532234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.532706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.532733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.532756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.532774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.533222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.533253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.533277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.533294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.533764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.533789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.533811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:07.903 [2024-10-28 15:19:54.533828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:07.903 passed 00:24:07.903 Test: blockdev nvme passthru rw ...passed 00:24:07.903 Test: blockdev nvme passthru vendor specific ...[2024-10-28 15:19:54.616143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.903 [2024-10-28 15:19:54.616172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.616401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.903 [2024-10-28 15:19:54.616425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.616565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.903 [2024-10-28 15:19:54.616590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.903 [2024-10-28 15:19:54.616739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.903 [2024-10-28 15:19:54.616763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.903 passed 00:24:07.903 Test: blockdev nvme admin passthru ...passed 00:24:07.903 Test: blockdev copy ...passed 00:24:07.903 00:24:07.903 Run Summary: Type Total Ran Passed Failed Inactive 00:24:07.903 suites 1 1 n/a 0 0 00:24:07.903 tests 23 23 23 0 0 00:24:07.903 asserts 152 152 152 0 n/a 00:24:07.903 00:24:07.903 Elapsed time = 1.158 seconds 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.470 rmmod nvme_tcp 00:24:08.470 rmmod nvme_fabrics 00:24:08.470 rmmod nvme_keyring 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3210909 ']' 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3210909 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3210909 ']' 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3210909 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3210909 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:24:08.470 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:24:08.471 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3210909' 00:24:08.471 killing process with pid 3210909 00:24:08.471 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3210909 00:24:08.471 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3210909 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.730 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.731 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:08.731 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.731 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.731 15:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.275 00:24:11.275 real 0m7.819s 00:24:11.275 user 0m11.941s 00:24:11.275 sys 0m3.458s 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.275 ************************************ 00:24:11.275 END TEST nvmf_bdevio_no_huge 00:24:11.275 ************************************ 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:11.275 ************************************ 00:24:11.275 START TEST nvmf_tls 00:24:11.275 ************************************ 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:11.275 * Looking for test storage... 00:24:11.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # lcov --version 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.275 --rc genhtml_branch_coverage=1 00:24:11.275 --rc genhtml_function_coverage=1 00:24:11.275 --rc genhtml_legend=1 00:24:11.275 --rc geninfo_all_blocks=1 00:24:11.275 --rc geninfo_unexecuted_blocks=1 00:24:11.275 00:24:11.275 ' 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.275 --rc genhtml_branch_coverage=1 00:24:11.275 --rc genhtml_function_coverage=1 00:24:11.275 --rc genhtml_legend=1 00:24:11.275 --rc geninfo_all_blocks=1 00:24:11.275 --rc geninfo_unexecuted_blocks=1 00:24:11.275 00:24:11.275 ' 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.275 --rc genhtml_branch_coverage=1 00:24:11.275 --rc genhtml_function_coverage=1 00:24:11.275 --rc genhtml_legend=1 00:24:11.275 --rc geninfo_all_blocks=1 00:24:11.275 --rc geninfo_unexecuted_blocks=1 00:24:11.275 00:24:11.275 ' 00:24:11.275 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.275 --rc genhtml_branch_coverage=1 00:24:11.275 --rc genhtml_function_coverage=1 00:24:11.275 --rc genhtml_legend=1 00:24:11.275 --rc geninfo_all_blocks=1 00:24:11.275 --rc geninfo_unexecuted_blocks=1 00:24:11.275 00:24:11.275 ' 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
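Earlier in this block, scripts/common.sh works out which coverage options tls.sh should use: it reads the installed lcov version (1.15 here, extracted with awk '{print $NF}') and compares it field by field against 2 through lt/cmp_versions before exporting the legacy LCOV_OPTS. A condensed sketch of that comparison logic is below; version_lt is an illustrative stand-in, not the exact helper.

# illustrative field-by-field version comparison in the spirit of cmp_versions:
# succeeds when the first version sorts before the second
version_lt() {
	local IFS='.-' v1 v2 i
	read -ra v1 <<< "$1"
	read -ra v2 <<< "$2"
	for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
		(( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
		(( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
	done
	return 1 # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_*_coverage=1 options"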
00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.276 15:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.568 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.568 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.568 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.568 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.568 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:14.569 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:14.569 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:14.569 Found net devices under 0000:84:00.0: cvl_0_0 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:14.569 Found net devices under 0000:84:00.1: cvl_0_1 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.569 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:24:14.570 00:24:14.570 --- 10.0.0.2 ping statistics --- 00:24:14.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.570 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:24:14.570 00:24:14.570 --- 10.0.0.1 ping statistics --- 00:24:14.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.570 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3213161 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3213161 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3213161 ']' 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.570 15:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.570 [2024-10-28 15:20:00.990054] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:24:14.570 [2024-10-28 15:20:00.990167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.570 [2024-10-28 15:20:01.137596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.570 [2024-10-28 15:20:01.254953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.570 [2024-10-28 15:20:01.255059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.570 [2024-10-28 15:20:01.255096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.570 [2024-10-28 15:20:01.255127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.570 [2024-10-28 15:20:01.255154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.570 [2024-10-28 15:20:01.256562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:14.828 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:15.086 true 00:24:15.086 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:15.086 15:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:15.345 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:15.345 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:15.345 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:15.911 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:15.911 15:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:16.477 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:16.477 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:16.477 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:17.045 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:17.045 15:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:17.612 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:17.612 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:17.612 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:17.612 15:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:18.549 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:18.549 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:18.549 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:18.807 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:18.807 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:19.066 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:19.066 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:19.066 15:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:19.633 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:19.633 15:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:20.201 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:20.459 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.iqI9e4hULz 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.hpMpjg35m9 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iqI9e4hULz 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.hpMpjg35m9 00:24:20.460 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:20.717 15:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:21.283 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.iqI9e4hULz 00:24:21.283 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iqI9e4hULz 00:24:21.283 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.541 [2024-10-28 15:20:08.333310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.541 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:22.106 15:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:22.672 [2024-10-28 15:20:09.328428] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.672 [2024-10-28 15:20:09.328948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.672 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.932 malloc0 00:24:22.932 15:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:23.864 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iqI9e4hULz 00:24:24.123 15:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:24.689 15:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iqI9e4hULz 00:24:34.666 Initializing NVMe Controllers 00:24:34.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:34.666 Initialization complete. Launching workers. 00:24:34.666 ======================================================== 00:24:34.666 Latency(us) 00:24:34.666 Device Information : IOPS MiB/s Average min max 00:24:34.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4241.19 16.57 15102.23 1509.41 21223.40 00:24:34.666 ======================================================== 00:24:34.666 Total : 4241.19 16.57 15102.23 1509.41 21223.40 00:24:34.666 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iqI9e4hULz 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iqI9e4hULz 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3215586 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3215586 /var/tmp/bdevperf.sock 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3215586 ']' 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
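The two key files registered above, /tmp/tmp.iqI9e4hULz and /tmp/tmp.hpMpjg35m9, are produced by format_interchange_psk, which pipes the literal hex string through an inline "python -" snippet whose body the xtrace does not print. The sketch below is a guess at what that snippet computes, assuming the NVMe/TCP PSK interchange layout (prefix, two-digit hash identifier, base64 of the configured key bytes with their CRC32 appended little-endian, trailing colon); the format_key name matches the traced helper, but the argument passing and the python body are assumptions.

# illustrative re-implementation; the real helper lives in nvmf/common.sh and
# embeds its python body in a heredoc that the trace does not show
format_key() {
	local prefix=$1 key=$2 digest=$3
	python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 of the key bytes, little-endian (assumed)
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()), end="")
PYEOF
}
# if the assumptions hold, this reproduces the first key from the trace:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1

Only the first key is registered on the target (keyring_file_add_key key0 /tmp/tmp.iqI9e4hULz) and bound to the host with nvmf_subsystem_add_host --psk key0; the second key is written out and chmod'ed 0600 but never added to the subsystem, which sets up the expected-failure case at the end of this section.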
00:24:34.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.666 15:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.926 [2024-10-28 15:20:21.544596] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:24:34.926 [2024-10-28 15:20:21.544799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215586 ] 00:24:34.926 [2024-10-28 15:20:21.705338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.185 [2024-10-28 15:20:21.820568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.444 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.444 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:35.444 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iqI9e4hULz 00:24:35.703 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:36.271 [2024-10-28 15:20:22.829766] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.271 TLSTESTn1 00:24:36.271 15:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:36.271 Running I/O for 10 seconds... 
00:24:38.582 1546.00 IOPS, 6.04 MiB/s [2024-10-28T14:20:26.382Z] 2024.00 IOPS, 7.91 MiB/s [2024-10-28T14:20:27.317Z] 2025.00 IOPS, 7.91 MiB/s [2024-10-28T14:20:28.255Z] 2214.25 IOPS, 8.65 MiB/s [2024-10-28T14:20:29.191Z] 2111.60 IOPS, 8.25 MiB/s [2024-10-28T14:20:30.207Z] 2242.50 IOPS, 8.76 MiB/s [2024-10-28T14:20:31.176Z] 2140.29 IOPS, 8.36 MiB/s [2024-10-28T14:20:32.116Z] 2077.38 IOPS, 8.11 MiB/s [2024-10-28T14:20:33.500Z] 2048.67 IOPS, 8.00 MiB/s [2024-10-28T14:20:33.500Z] 2104.40 IOPS, 8.22 MiB/s 00:24:46.633 Latency(us) 00:24:46.633 [2024-10-28T14:20:33.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.633 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:46.633 Verification LBA range: start 0x0 length 0x2000 00:24:46.633 TLSTESTn1 : 10.05 2107.13 8.23 0.00 0.00 60578.99 12718.84 57089.14 00:24:46.633 [2024-10-28T14:20:33.500Z] =================================================================================================================== 00:24:46.633 [2024-10-28T14:20:33.500Z] Total : 2107.13 8.23 0.00 0.00 60578.99 12718.84 57089.14 00:24:46.633 { 00:24:46.633 "results": [ 00:24:46.633 { 00:24:46.633 "job": "TLSTESTn1", 00:24:46.633 "core_mask": "0x4", 00:24:46.633 "workload": "verify", 00:24:46.633 "status": "finished", 00:24:46.633 "verify_range": { 00:24:46.633 "start": 0, 00:24:46.633 "length": 8192 00:24:46.633 }, 00:24:46.633 "queue_depth": 128, 00:24:46.633 "io_size": 4096, 00:24:46.633 "runtime": 10.047809, 00:24:46.633 "iops": 2107.126041110057, 00:24:46.633 "mibps": 8.23096109808616, 00:24:46.633 "io_failed": 0, 00:24:46.633 "io_timeout": 0, 00:24:46.633 "avg_latency_us": 60578.9867562329, 00:24:46.633 "min_latency_us": 12718.838518518518, 00:24:46.633 "max_latency_us": 57089.137777777774 00:24:46.633 } 00:24:46.633 ], 00:24:46.633 "core_count": 1 00:24:46.633 } 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3215586 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3215586 ']' 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3215586 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3215586 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3215586' 00:24:46.633 killing process with pid 3215586 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3215586 00:24:46.633 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.633 00:24:46.633 Latency(us) 00:24:46.633 [2024-10-28T14:20:33.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.633 [2024-10-28T14:20:33.500Z] 
=================================================================================================================== 00:24:46.633 [2024-10-28T14:20:33.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.633 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3215586 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hpMpjg35m9 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hpMpjg35m9 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hpMpjg35m9 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hpMpjg35m9 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3216914 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3216914 /var/tmp/bdevperf.sock 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3216914 ']' 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.894 15:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.894 [2024-10-28 15:20:33.686880] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:24:46.894 [2024-10-28 15:20:33.687066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216914 ] 00:24:47.153 [2024-10-28 15:20:33.832743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.153 [2024-10-28 15:20:33.899409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.411 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.411 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:47.411 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hpMpjg35m9 00:24:47.978 15:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.547 [2024-10-28 15:20:35.180614] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.547 [2024-10-28 15:20:35.192435] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:48.547 [2024-10-28 15:20:35.192846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6920 (107): Transport endpoint is not connected 00:24:48.547 [2024-10-28 15:20:35.193840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6920 (9): Bad file descriptor 00:24:48.547 [2024-10-28 15:20:35.194833] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:48.547 [2024-10-28 15:20:35.194886] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:48.547 [2024-10-28 15:20:35.194945] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:48.547 [2024-10-28 15:20:35.194996] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
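For reference, the initiator-side flow that these nvmf_tls cases exercise reduces to three RPC calls against the bdevperf application's UNIX socket. The sketch below is condensed from the commands visible in this trace (socket path, NQNs, key names and temp key files are the test's own values); it is not the target/tls.sh helper itself. The JSON-RPC error dump that follows is the expected outcome of the current negative case, where the initiator registered /tmp/tmp.hpMpjg35m9, a key file that does not match the PSK the target expects for host1.

  # bdevperf itself was started as: build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
  # 1) register the PSK file with bdevperf's keyring under the name key0
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iqI9e4hULz
  # 2) attach a TLS-enabled NVMe/TCP controller that references that key
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # 3) run the verify workload and collect the IOPS/latency table shown above
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests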
00:24:48.547 request: 00:24:48.547 { 00:24:48.547 "name": "TLSTEST", 00:24:48.547 "trtype": "tcp", 00:24:48.547 "traddr": "10.0.0.2", 00:24:48.547 "adrfam": "ipv4", 00:24:48.547 "trsvcid": "4420", 00:24:48.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.547 "prchk_reftag": false, 00:24:48.547 "prchk_guard": false, 00:24:48.547 "hdgst": false, 00:24:48.547 "ddgst": false, 00:24:48.547 "psk": "key0", 00:24:48.547 "allow_unrecognized_csi": false, 00:24:48.547 "method": "bdev_nvme_attach_controller", 00:24:48.547 "req_id": 1 00:24:48.547 } 00:24:48.547 Got JSON-RPC error response 00:24:48.547 response: 00:24:48.547 { 00:24:48.547 "code": -5, 00:24:48.547 "message": "Input/output error" 00:24:48.547 } 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3216914 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3216914 ']' 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3216914 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3216914 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3216914' 00:24:48.547 killing process with pid 3216914 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3216914 00:24:48.547 Received shutdown signal, test time was about 10.000000 seconds 00:24:48.547 00:24:48.547 Latency(us) 00:24:48.547 [2024-10-28T14:20:35.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.547 [2024-10-28T14:20:35.414Z] =================================================================================================================== 00:24:48.547 [2024-10-28T14:20:35.414Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:48.547 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3216914 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iqI9e4hULz 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.iqI9e4hULz 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iqI9e4hULz 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iqI9e4hULz 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3217181 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3217181 /var/tmp/bdevperf.sock 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3217181 ']' 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.806 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.806 [2024-10-28 15:20:35.638272] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:24:48.806 [2024-10-28 15:20:35.638381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217181 ] 00:24:49.064 [2024-10-28 15:20:35.717392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.064 [2024-10-28 15:20:35.782781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.064 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.064 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:49.064 15:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iqI9e4hULz 00:24:49.631 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:49.891 [2024-10-28 15:20:36.647451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.891 [2024-10-28 15:20:36.657331] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:49.891 [2024-10-28 15:20:36.657409] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:49.891 [2024-10-28 15:20:36.657510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:49.891 [2024-10-28 15:20:36.657559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5920 (107): Transport endpoint is not connected 00:24:49.891 [2024-10-28 15:20:36.658526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5920 (9): Bad file descriptor 00:24:49.891 [2024-10-28 15:20:36.659520] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:49.891 [2024-10-28 15:20:36.659578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:49.891 [2024-10-28 15:20:36.659617] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:49.891 [2024-10-28 15:20:36.659692] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
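The "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" errors above are reported by the target side: a TLS PSK is bound to a specific (host NQN, subsystem NQN) pair, and host2 was never given one, so its identity string cannot be resolved and the connection fails. The binding step is the same nvmf_subsystem_add_host call that appears later in this trace; a minimal sketch is shown here (key0 must already exist in the target's keyring). The JSON-RPC error that follows is the expected result for this case.

  # on the target: permit host1 on cnode1 and bind the TLS PSK named key0 to that pairing;
  # host2 has no such entry, so its PSK identity ("NVMe0R01 <hostnqn> <subnqn>") is unknown
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0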
00:24:49.891 request: 00:24:49.891 { 00:24:49.891 "name": "TLSTEST", 00:24:49.891 "trtype": "tcp", 00:24:49.891 "traddr": "10.0.0.2", 00:24:49.891 "adrfam": "ipv4", 00:24:49.891 "trsvcid": "4420", 00:24:49.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.891 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:49.891 "prchk_reftag": false, 00:24:49.891 "prchk_guard": false, 00:24:49.891 "hdgst": false, 00:24:49.891 "ddgst": false, 00:24:49.891 "psk": "key0", 00:24:49.891 "allow_unrecognized_csi": false, 00:24:49.891 "method": "bdev_nvme_attach_controller", 00:24:49.891 "req_id": 1 00:24:49.891 } 00:24:49.891 Got JSON-RPC error response 00:24:49.891 response: 00:24:49.891 { 00:24:49.891 "code": -5, 00:24:49.891 "message": "Input/output error" 00:24:49.891 } 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3217181 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3217181 ']' 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3217181 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3217181 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3217181' 00:24:49.891 killing process with pid 3217181 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3217181 00:24:49.891 Received shutdown signal, test time was about 10.000000 seconds 00:24:49.891 00:24:49.891 Latency(us) 00:24:49.891 [2024-10-28T14:20:36.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.891 [2024-10-28T14:20:36.758Z] =================================================================================================================== 00:24:49.891 [2024-10-28T14:20:36.758Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:49.891 15:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3217181 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iqI9e4hULz 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.iqI9e4hULz 00:24:50.462 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iqI9e4hULz 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iqI9e4hULz 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3217326 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3217326 /var/tmp/bdevperf.sock 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3217326 ']' 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.463 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.463 [2024-10-28 15:20:37.106392] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:24:50.463 [2024-10-28 15:20:37.106508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217326 ] 00:24:50.463 [2024-10-28 15:20:37.243010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.724 [2024-10-28 15:20:37.358774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.724 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.724 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:50.724 15:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iqI9e4hULz 00:24:51.664 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:51.664 [2024-10-28 15:20:38.528015] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.924 [2024-10-28 15:20:38.541167] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:51.924 [2024-10-28 15:20:38.541244] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:51.924 [2024-10-28 15:20:38.541337] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:51.924 [2024-10-28 15:20:38.542050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1956920 (107): Transport endpoint is not connected 00:24:51.924 [2024-10-28 15:20:38.543023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1956920 (9): Bad file descriptor 00:24:51.924 [2024-10-28 15:20:38.544016] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:51.924 [2024-10-28 15:20:38.544071] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:51.924 [2024-10-28 15:20:38.544105] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:51.924 [2024-10-28 15:20:38.544153] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
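This third negative case (host1's key presented against cnode2) fails the same way, and like the previous two it is driven through the harness's NOT helper, which inverts the exit status so that an expected connect failure keeps the suite green. A simplified sketch of the idea, not the actual autotest_common.sh implementation:

  # expected-failure wrapper (simplified; the real helper also handles xtrace and argument checks)
  NOT() { ! "$@"; }
  # run_bdevperf is target/tls.sh's own helper; shown only to illustrate the wrapping
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iqI9e4hULz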
00:24:51.924 request: 00:24:51.924 { 00:24:51.924 "name": "TLSTEST", 00:24:51.924 "trtype": "tcp", 00:24:51.924 "traddr": "10.0.0.2", 00:24:51.924 "adrfam": "ipv4", 00:24:51.924 "trsvcid": "4420", 00:24:51.924 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:51.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.924 "prchk_reftag": false, 00:24:51.924 "prchk_guard": false, 00:24:51.924 "hdgst": false, 00:24:51.924 "ddgst": false, 00:24:51.924 "psk": "key0", 00:24:51.924 "allow_unrecognized_csi": false, 00:24:51.924 "method": "bdev_nvme_attach_controller", 00:24:51.924 "req_id": 1 00:24:51.924 } 00:24:51.924 Got JSON-RPC error response 00:24:51.924 response: 00:24:51.924 { 00:24:51.924 "code": -5, 00:24:51.924 "message": "Input/output error" 00:24:51.924 } 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3217326 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3217326 ']' 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3217326 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3217326 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3217326' 00:24:51.924 killing process with pid 3217326 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3217326 00:24:51.924 Received shutdown signal, test time was about 10.000000 seconds 00:24:51.924 00:24:51.924 Latency(us) 00:24:51.924 [2024-10-28T14:20:38.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.924 [2024-10-28T14:20:38.791Z] =================================================================================================================== 00:24:51.924 [2024-10-28T14:20:38.791Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:51.924 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3217326 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:52.184 
15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3217589 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3217589 /var/tmp/bdevperf.sock 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3217589 ']' 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:52.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.184 15:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.184 [2024-10-28 15:20:39.032352] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:24:52.184 [2024-10-28 15:20:39.032458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217589 ] 00:24:52.445 [2024-10-28 15:20:39.141446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.445 [2024-10-28 15:20:39.263766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.706 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.706 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:52.706 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:52.968 [2024-10-28 15:20:39.755719] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:52.968 [2024-10-28 15:20:39.755815] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:52.968 request: 00:24:52.968 { 00:24:52.968 "name": "key0", 00:24:52.968 "path": "", 00:24:52.968 "method": "keyring_file_add_key", 00:24:52.968 "req_id": 1 00:24:52.968 } 00:24:52.968 Got JSON-RPC error response 00:24:52.968 response: 00:24:52.968 { 00:24:52.968 "code": -1, 00:24:52.968 "message": "Operation not permitted" 00:24:52.968 } 00:24:52.968 15:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:53.227 [2024-10-28 15:20:40.080800] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:53.227 [2024-10-28 15:20:40.080897] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:53.227 request: 00:24:53.227 { 00:24:53.227 "name": "TLSTEST", 00:24:53.227 "trtype": "tcp", 00:24:53.227 "traddr": "10.0.0.2", 00:24:53.227 "adrfam": "ipv4", 00:24:53.227 "trsvcid": "4420", 00:24:53.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.227 "prchk_reftag": false, 00:24:53.227 "prchk_guard": false, 00:24:53.227 "hdgst": false, 00:24:53.227 "ddgst": false, 00:24:53.227 "psk": "key0", 00:24:53.227 "allow_unrecognized_csi": false, 00:24:53.227 "method": "bdev_nvme_attach_controller", 00:24:53.227 "req_id": 1 00:24:53.227 } 00:24:53.227 Got JSON-RPC error response 00:24:53.227 response: 00:24:53.227 { 00:24:53.227 "code": -126, 00:24:53.227 "message": "Required key not available" 00:24:53.227 } 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3217589 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3217589 ']' 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3217589 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3217589 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3217589' 00:24:53.487 killing process with pid 3217589 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3217589 00:24:53.487 Received shutdown signal, test time was about 10.000000 seconds 00:24:53.487 00:24:53.487 Latency(us) 00:24:53.487 [2024-10-28T14:20:40.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.487 [2024-10-28T14:20:40.354Z] =================================================================================================================== 00:24:53.487 [2024-10-28T14:20:40.354Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:53.487 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3217589 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3213161 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3213161 ']' 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3213161 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3213161 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3213161' 00:24:53.746 killing process with pid 3213161 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3213161 00:24:53.746 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3213161 00:24:54.006 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:54.006 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:54.006 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:54.006 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:54.006 15:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:54.006 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:54.006 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.5mD0rT7vYi 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.5mD0rT7vYi 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3217868 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3217868 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3217868 ']' 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.268 15:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.268 [2024-10-28 15:20:41.032774] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:24:54.268 [2024-10-28 15:20:41.032870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.529 [2024-10-28 15:20:41.165976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.529 [2024-10-28 15:20:41.285684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.529 [2024-10-28 15:20:41.285793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
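A few lines above, target/tls.sh@160-163 builds key_long, the NVMe TLS interchange form of the raw hex PSK (the 02 field mirrors the digest argument 2 passed to format_interchange_psk), writes it to a mktemp file and locks down the permissions. Condensed into a sketch with the values taken from this trace; the owner-only chmod matters, because the keyring_file backend rejects group/world-readable key files, which is exactly what the later chmod 0666 negative case demonstrates ("Invalid permissions for key file ... 0100666").

  # materialize the long-format key as a file that keyring_file_add_key will accept
  KEY_LONG='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  KEY_FILE=$(mktemp)                 # this run got /tmp/tmp.5mD0rT7vYi
  echo -n "$KEY_LONG" > "$KEY_FILE"
  chmod 0600 "$KEY_FILE"             # 0666 would later be rejected by keyring_file_add_key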
00:24:54.529 [2024-10-28 15:20:41.285829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.529 [2024-10-28 15:20:41.285859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.529 [2024-10-28 15:20:41.285884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.529 [2024-10-28 15:20:41.287269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.5mD0rT7vYi 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5mD0rT7vYi 00:24:54.789 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:55.048 [2024-10-28 15:20:41.905715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.308 15:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:55.569 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:55.829 [2024-10-28 15:20:42.676158] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:55.829 [2024-10-28 15:20:42.676628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.090 15:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:56.661 malloc0 00:24:56.661 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:56.922 15:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:24:57.182 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5mD0rT7vYi 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5mD0rT7vYi 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3218250 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3218250 /var/tmp/bdevperf.sock 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3218250 ']' 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.752 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.752 [2024-10-28 15:20:44.517534] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:24:57.752 [2024-10-28 15:20:44.517739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218250 ] 00:24:58.012 [2024-10-28 15:20:44.629763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.012 [2024-10-28 15:20:44.706670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.270 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.270 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:58.270 15:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:24:58.529 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:58.790 [2024-10-28 15:20:45.631071] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.051 TLSTESTn1 00:24:59.051 15:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:59.051 Running I/O for 10 seconds... 00:25:01.372 1626.00 IOPS, 6.35 MiB/s [2024-10-28T14:20:49.175Z] 1554.50 IOPS, 6.07 MiB/s [2024-10-28T14:20:50.114Z] 1742.00 IOPS, 6.80 MiB/s [2024-10-28T14:20:51.050Z] 1814.50 IOPS, 7.09 MiB/s [2024-10-28T14:20:51.989Z] 1885.60 IOPS, 7.37 MiB/s [2024-10-28T14:20:52.930Z] 1913.67 IOPS, 7.48 MiB/s [2024-10-28T14:20:54.310Z] 1850.86 IOPS, 7.23 MiB/s [2024-10-28T14:20:55.244Z] 1804.25 IOPS, 7.05 MiB/s [2024-10-28T14:20:56.181Z] 1822.00 IOPS, 7.12 MiB/s [2024-10-28T14:20:56.181Z] 1844.40 IOPS, 7.20 MiB/s 00:25:09.314 Latency(us) 00:25:09.314 [2024-10-28T14:20:56.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.314 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:09.314 Verification LBA range: start 0x0 length 0x2000 00:25:09.314 TLSTESTn1 : 10.05 1848.66 7.22 0.00 0.00 69047.94 13786.83 63691.28 00:25:09.314 [2024-10-28T14:20:56.181Z] =================================================================================================================== 00:25:09.314 [2024-10-28T14:20:56.181Z] Total : 1848.66 7.22 0.00 0.00 69047.94 13786.83 63691.28 00:25:09.314 { 00:25:09.314 "results": [ 00:25:09.314 { 00:25:09.314 "job": "TLSTESTn1", 00:25:09.314 "core_mask": "0x4", 00:25:09.314 "workload": "verify", 00:25:09.314 "status": "finished", 00:25:09.314 "verify_range": { 00:25:09.314 "start": 0, 00:25:09.314 "length": 8192 00:25:09.314 }, 00:25:09.314 "queue_depth": 128, 00:25:09.314 "io_size": 4096, 00:25:09.314 "runtime": 10.045644, 00:25:09.314 "iops": 1848.6619673163812, 00:25:09.314 "mibps": 7.221335809829614, 00:25:09.314 "io_failed": 0, 00:25:09.314 "io_timeout": 0, 00:25:09.314 "avg_latency_us": 69047.93608449653, 00:25:09.314 "min_latency_us": 13786.832592592593, 00:25:09.314 "max_latency_us": 63691.28296296296 00:25:09.314 } 00:25:09.314 ], 00:25:09.314 "core_count": 1 
00:25:09.314 } 00:25:09.314 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:09.314 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3218250 00:25:09.314 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3218250 ']' 00:25:09.314 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3218250 00:25:09.314 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:09.314 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.314 15:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3218250 00:25:09.314 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:09.314 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:09.314 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3218250' 00:25:09.314 killing process with pid 3218250 00:25:09.314 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3218250 00:25:09.314 Received shutdown signal, test time was about 10.000000 seconds 00:25:09.314 00:25:09.314 Latency(us) 00:25:09.314 [2024-10-28T14:20:56.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.314 [2024-10-28T14:20:56.181Z] =================================================================================================================== 00:25:09.314 [2024-10-28T14:20:56.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.314 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3218250 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.5mD0rT7vYi 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5mD0rT7vYi 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5mD0rT7vYi 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5mD0rT7vYi 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:09.573 15:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5mD0rT7vYi 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3219604 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3219604 /var/tmp/bdevperf.sock 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3219604 ']' 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:09.573 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.833 [2024-10-28 15:20:56.477593] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:25:09.833 [2024-10-28 15:20:56.477709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219604 ] 00:25:09.833 [2024-10-28 15:20:56.590761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.091 [2024-10-28 15:20:56.714330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.091 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:10.091 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:10.091 15:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:25:10.660 [2024-10-28 15:20:57.300665] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5mD0rT7vYi': 0100666 00:25:10.660 [2024-10-28 15:20:57.300766] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:10.660 request: 00:25:10.660 { 00:25:10.660 "name": "key0", 00:25:10.660 "path": "/tmp/tmp.5mD0rT7vYi", 00:25:10.660 "method": "keyring_file_add_key", 00:25:10.660 "req_id": 1 00:25:10.660 } 00:25:10.660 Got JSON-RPC error response 00:25:10.660 response: 00:25:10.660 { 00:25:10.660 "code": -1, 00:25:10.660 "message": "Operation not permitted" 00:25:10.660 } 00:25:10.660 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:11.228 [2024-10-28 15:20:57.838426] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.228 [2024-10-28 15:20:57.838556] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:11.228 request: 00:25:11.228 { 00:25:11.228 "name": "TLSTEST", 00:25:11.228 "trtype": "tcp", 00:25:11.228 "traddr": "10.0.0.2", 00:25:11.228 "adrfam": "ipv4", 00:25:11.228 "trsvcid": "4420", 00:25:11.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.228 "prchk_reftag": false, 00:25:11.228 "prchk_guard": false, 00:25:11.228 "hdgst": false, 00:25:11.228 "ddgst": false, 00:25:11.228 "psk": "key0", 00:25:11.228 "allow_unrecognized_csi": false, 00:25:11.228 "method": "bdev_nvme_attach_controller", 00:25:11.228 "req_id": 1 00:25:11.228 } 00:25:11.228 Got JSON-RPC error response 00:25:11.228 response: 00:25:11.228 { 00:25:11.228 "code": -126, 00:25:11.228 "message": "Required key not available" 00:25:11.228 } 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3219604 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3219604 ']' 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3219604 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3219604 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3219604' 00:25:11.228 killing process with pid 3219604 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3219604 00:25:11.228 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.228 00:25:11.228 Latency(us) 00:25:11.228 [2024-10-28T14:20:58.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.228 [2024-10-28T14:20:58.095Z] =================================================================================================================== 00:25:11.228 [2024-10-28T14:20:58.095Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.228 15:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3219604 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3217868 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3217868 ']' 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3217868 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3217868 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3217868' 00:25:11.487 killing process with pid 3217868 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3217868 00:25:11.487 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3217868 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3219884 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3219884 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3219884 ']' 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.058 15:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.058 [2024-10-28 15:20:58.816648] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:12.058 [2024-10-28 15:20:58.816780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.318 [2024-10-28 15:20:58.953670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.318 [2024-10-28 15:20:59.067420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.318 [2024-10-28 15:20:59.067529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.318 [2024-10-28 15:20:59.067580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.318 [2024-10-28 15:20:59.067611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.318 [2024-10-28 15:20:59.067639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
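[editor's sketch] A condensed sketch of the key-permission check exercised in the failures above, assuming nothing beyond what this run shows: the paths and the bdevperf RPC socket are the ones in the trace, the full /var/jenkins/.../spdk/scripts/rpc.py path is shortened to scripts/rpc.py, and the 0600 requirement is inferred from the "Invalid permissions for key file ... 0100666" error together with the chmod 0600 that target/tls.sh performs later in this log before the key is accepted.

  # PSK interchange file left world-readable (0666): keyring_file_add_key is rejected,
  # so the subsequent bdev_nvme_attach_controller --psk key0 fails with
  # "Required key not available".
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi

  # Tightening the mode lets the same RPC succeed, as the later passes in this log show.
  chmod 0600 /tmp/tmp.5mD0rT7vYi
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi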
00:25:12.318 [2024-10-28 15:20:59.069040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.5mD0rT7vYi 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5mD0rT7vYi 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.5mD0rT7vYi 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5mD0rT7vYi 00:25:12.579 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:12.840 [2024-10-28 15:20:59.635362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.840 15:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:13.410 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:14.023 [2024-10-28 15:21:00.671576] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:14.023 [2024-10-28 15:21:00.672060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.023 15:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:14.596 malloc0 00:25:14.596 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:15.162 15:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:25:15.162 [2024-10-28 
15:21:01.996804] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5mD0rT7vYi': 0100666 00:25:15.162 [2024-10-28 15:21:01.996860] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:15.162 request: 00:25:15.162 { 00:25:15.162 "name": "key0", 00:25:15.162 "path": "/tmp/tmp.5mD0rT7vYi", 00:25:15.162 "method": "keyring_file_add_key", 00:25:15.162 "req_id": 1 00:25:15.162 } 00:25:15.162 Got JSON-RPC error response 00:25:15.162 response: 00:25:15.162 { 00:25:15.162 "code": -1, 00:25:15.162 "message": "Operation not permitted" 00:25:15.162 } 00:25:15.162 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:15.421 [2024-10-28 15:21:02.281756] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:15.421 [2024-10-28 15:21:02.281820] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:15.680 request: 00:25:15.680 { 00:25:15.680 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.680 "host": "nqn.2016-06.io.spdk:host1", 00:25:15.680 "psk": "key0", 00:25:15.680 "method": "nvmf_subsystem_add_host", 00:25:15.680 "req_id": 1 00:25:15.680 } 00:25:15.680 Got JSON-RPC error response 00:25:15.680 response: 00:25:15.680 { 00:25:15.680 "code": -32603, 00:25:15.680 "message": "Internal error" 00:25:15.680 } 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3219884 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3219884 ']' 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3219884 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3219884 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3219884' 00:25:15.680 killing process with pid 3219884 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3219884 00:25:15.680 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3219884 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.5mD0rT7vYi 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:25:15.940 15:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3220432 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3220432 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3220432 ']' 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.940 15:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.201 [2024-10-28 15:21:02.824755] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:16.202 [2024-10-28 15:21:02.824867] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.202 [2024-10-28 15:21:02.946466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.202 [2024-10-28 15:21:03.059220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.202 [2024-10-28 15:21:03.059333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.202 [2024-10-28 15:21:03.059369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.202 [2024-10-28 15:21:03.059399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.202 [2024-10-28 15:21:03.059424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
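[editor's sketch] For readability, the setup_nvmf_tgt sequence (target/tls.sh@50-59) that the trace walks through above and repeats below is collected here. Every RPC call is taken verbatim from this log; only the full rpc.py path is shortened to scripts/rpc.py.

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-capable (flagged "experimental" in the notices above)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # fails while the key file is 0666; succeeds once it is 0600
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi
  # "Key 'key0' does not exist" / Internal error when the previous step failed
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0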
00:25:16.202 [2024-10-28 15:21:03.060771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.5mD0rT7vYi 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5mD0rT7vYi 00:25:16.461 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:17.031 [2024-10-28 15:21:03.594888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.031 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:17.291 15:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:17.862 [2024-10-28 15:21:04.570942] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:17.862 [2024-10-28 15:21:04.571433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.862 15:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:18.432 malloc0 00:25:18.432 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:18.692 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:25:19.264 15:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3220926 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3220926 /var/tmp/bdevperf.sock 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3220926 ']' 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:19.833 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.833 [2024-10-28 15:21:06.538942] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:19.833 [2024-10-28 15:21:06.539034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220926 ] 00:25:19.833 [2024-10-28 15:21:06.670487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.091 [2024-10-28 15:21:06.789995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.091 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.091 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:20.091 15:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:25:20.656 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:21.226 [2024-10-28 15:21:07.836212] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:21.226 TLSTESTn1 00:25:21.226 15:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:21.798 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:21.798 "subsystems": [ 00:25:21.798 { 00:25:21.798 "subsystem": "keyring", 00:25:21.798 "config": [ 00:25:21.798 { 00:25:21.798 "method": "keyring_file_add_key", 00:25:21.798 "params": { 00:25:21.798 "name": "key0", 00:25:21.798 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:21.798 } 00:25:21.798 } 00:25:21.798 ] 00:25:21.798 }, 00:25:21.798 { 00:25:21.798 "subsystem": "iobuf", 00:25:21.798 "config": [ 00:25:21.798 { 00:25:21.798 "method": "iobuf_set_options", 00:25:21.798 "params": { 00:25:21.798 "small_pool_count": 8192, 00:25:21.798 "large_pool_count": 1024, 00:25:21.798 "small_bufsize": 8192, 00:25:21.798 "large_bufsize": 135168, 00:25:21.798 "enable_numa": false 00:25:21.798 } 00:25:21.798 } 00:25:21.798 ] 00:25:21.798 }, 00:25:21.798 { 00:25:21.798 "subsystem": "sock", 00:25:21.798 "config": [ 00:25:21.798 { 00:25:21.798 "method": "sock_set_default_impl", 00:25:21.798 "params": { 00:25:21.798 "impl_name": "posix" 
00:25:21.798 } 00:25:21.798 }, 00:25:21.798 { 00:25:21.798 "method": "sock_impl_set_options", 00:25:21.798 "params": { 00:25:21.798 "impl_name": "ssl", 00:25:21.798 "recv_buf_size": 4096, 00:25:21.798 "send_buf_size": 4096, 00:25:21.798 "enable_recv_pipe": true, 00:25:21.798 "enable_quickack": false, 00:25:21.798 "enable_placement_id": 0, 00:25:21.798 "enable_zerocopy_send_server": true, 00:25:21.798 "enable_zerocopy_send_client": false, 00:25:21.798 "zerocopy_threshold": 0, 00:25:21.798 "tls_version": 0, 00:25:21.798 "enable_ktls": false 00:25:21.798 } 00:25:21.798 }, 00:25:21.798 { 00:25:21.798 "method": "sock_impl_set_options", 00:25:21.798 "params": { 00:25:21.798 "impl_name": "posix", 00:25:21.799 "recv_buf_size": 2097152, 00:25:21.799 "send_buf_size": 2097152, 00:25:21.799 "enable_recv_pipe": true, 00:25:21.799 "enable_quickack": false, 00:25:21.799 "enable_placement_id": 0, 00:25:21.799 "enable_zerocopy_send_server": true, 00:25:21.799 "enable_zerocopy_send_client": false, 00:25:21.799 "zerocopy_threshold": 0, 00:25:21.799 "tls_version": 0, 00:25:21.799 "enable_ktls": false 00:25:21.799 } 00:25:21.799 } 00:25:21.799 ] 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "subsystem": "vmd", 00:25:21.799 "config": [] 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "subsystem": "accel", 00:25:21.799 "config": [ 00:25:21.799 { 00:25:21.799 "method": "accel_set_options", 00:25:21.799 "params": { 00:25:21.799 "small_cache_size": 128, 00:25:21.799 "large_cache_size": 16, 00:25:21.799 "task_count": 2048, 00:25:21.799 "sequence_count": 2048, 00:25:21.799 "buf_count": 2048 00:25:21.799 } 00:25:21.799 } 00:25:21.799 ] 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "subsystem": "bdev", 00:25:21.799 "config": [ 00:25:21.799 { 00:25:21.799 "method": "bdev_set_options", 00:25:21.799 "params": { 00:25:21.799 "bdev_io_pool_size": 65535, 00:25:21.799 "bdev_io_cache_size": 256, 00:25:21.799 "bdev_auto_examine": true, 00:25:21.799 "iobuf_small_cache_size": 128, 00:25:21.799 "iobuf_large_cache_size": 16 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "bdev_raid_set_options", 00:25:21.799 "params": { 00:25:21.799 "process_window_size_kb": 1024, 00:25:21.799 "process_max_bandwidth_mb_sec": 0 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "bdev_iscsi_set_options", 00:25:21.799 "params": { 00:25:21.799 "timeout_sec": 30 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "bdev_nvme_set_options", 00:25:21.799 "params": { 00:25:21.799 "action_on_timeout": "none", 00:25:21.799 "timeout_us": 0, 00:25:21.799 "timeout_admin_us": 0, 00:25:21.799 "keep_alive_timeout_ms": 10000, 00:25:21.799 "arbitration_burst": 0, 00:25:21.799 "low_priority_weight": 0, 00:25:21.799 "medium_priority_weight": 0, 00:25:21.799 "high_priority_weight": 0, 00:25:21.799 "nvme_adminq_poll_period_us": 10000, 00:25:21.799 "nvme_ioq_poll_period_us": 0, 00:25:21.799 "io_queue_requests": 0, 00:25:21.799 "delay_cmd_submit": true, 00:25:21.799 "transport_retry_count": 4, 00:25:21.799 "bdev_retry_count": 3, 00:25:21.799 "transport_ack_timeout": 0, 00:25:21.799 "ctrlr_loss_timeout_sec": 0, 00:25:21.799 "reconnect_delay_sec": 0, 00:25:21.799 "fast_io_fail_timeout_sec": 0, 00:25:21.799 "disable_auto_failback": false, 00:25:21.799 "generate_uuids": false, 00:25:21.799 "transport_tos": 0, 00:25:21.799 "nvme_error_stat": false, 00:25:21.799 "rdma_srq_size": 0, 00:25:21.799 "io_path_stat": false, 00:25:21.799 "allow_accel_sequence": false, 00:25:21.799 "rdma_max_cq_size": 0, 00:25:21.799 
"rdma_cm_event_timeout_ms": 0, 00:25:21.799 "dhchap_digests": [ 00:25:21.799 "sha256", 00:25:21.799 "sha384", 00:25:21.799 "sha512" 00:25:21.799 ], 00:25:21.799 "dhchap_dhgroups": [ 00:25:21.799 "null", 00:25:21.799 "ffdhe2048", 00:25:21.799 "ffdhe3072", 00:25:21.799 "ffdhe4096", 00:25:21.799 "ffdhe6144", 00:25:21.799 "ffdhe8192" 00:25:21.799 ] 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "bdev_nvme_set_hotplug", 00:25:21.799 "params": { 00:25:21.799 "period_us": 100000, 00:25:21.799 "enable": false 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "bdev_malloc_create", 00:25:21.799 "params": { 00:25:21.799 "name": "malloc0", 00:25:21.799 "num_blocks": 8192, 00:25:21.799 "block_size": 4096, 00:25:21.799 "physical_block_size": 4096, 00:25:21.799 "uuid": "c9ead98b-295a-4ff5-8575-7a05f24f52e9", 00:25:21.799 "optimal_io_boundary": 0, 00:25:21.799 "md_size": 0, 00:25:21.799 "dif_type": 0, 00:25:21.799 "dif_is_head_of_md": false, 00:25:21.799 "dif_pi_format": 0 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "bdev_wait_for_examine" 00:25:21.799 } 00:25:21.799 ] 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "subsystem": "nbd", 00:25:21.799 "config": [] 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "subsystem": "scheduler", 00:25:21.799 "config": [ 00:25:21.799 { 00:25:21.799 "method": "framework_set_scheduler", 00:25:21.799 "params": { 00:25:21.799 "name": "static" 00:25:21.799 } 00:25:21.799 } 00:25:21.799 ] 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "subsystem": "nvmf", 00:25:21.799 "config": [ 00:25:21.799 { 00:25:21.799 "method": "nvmf_set_config", 00:25:21.799 "params": { 00:25:21.799 "discovery_filter": "match_any", 00:25:21.799 "admin_cmd_passthru": { 00:25:21.799 "identify_ctrlr": false 00:25:21.799 }, 00:25:21.799 "dhchap_digests": [ 00:25:21.799 "sha256", 00:25:21.799 "sha384", 00:25:21.799 "sha512" 00:25:21.799 ], 00:25:21.799 "dhchap_dhgroups": [ 00:25:21.799 "null", 00:25:21.799 "ffdhe2048", 00:25:21.799 "ffdhe3072", 00:25:21.799 "ffdhe4096", 00:25:21.799 "ffdhe6144", 00:25:21.799 "ffdhe8192" 00:25:21.799 ] 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "nvmf_set_max_subsystems", 00:25:21.799 "params": { 00:25:21.799 "max_subsystems": 1024 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "nvmf_set_crdt", 00:25:21.799 "params": { 00:25:21.799 "crdt1": 0, 00:25:21.799 "crdt2": 0, 00:25:21.799 "crdt3": 0 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "nvmf_create_transport", 00:25:21.799 "params": { 00:25:21.799 "trtype": "TCP", 00:25:21.799 "max_queue_depth": 128, 00:25:21.799 "max_io_qpairs_per_ctrlr": 127, 00:25:21.799 "in_capsule_data_size": 4096, 00:25:21.799 "max_io_size": 131072, 00:25:21.799 "io_unit_size": 131072, 00:25:21.799 "max_aq_depth": 128, 00:25:21.799 "num_shared_buffers": 511, 00:25:21.799 "buf_cache_size": 4294967295, 00:25:21.799 "dif_insert_or_strip": false, 00:25:21.799 "zcopy": false, 00:25:21.799 "c2h_success": false, 00:25:21.799 "sock_priority": 0, 00:25:21.799 "abort_timeout_sec": 1, 00:25:21.799 "ack_timeout": 0, 00:25:21.799 "data_wr_pool_size": 0 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "nvmf_create_subsystem", 00:25:21.799 "params": { 00:25:21.799 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.799 "allow_any_host": false, 00:25:21.799 "serial_number": "SPDK00000000000001", 00:25:21.799 "model_number": "SPDK bdev Controller", 00:25:21.799 "max_namespaces": 10, 00:25:21.799 "min_cntlid": 1, 00:25:21.799 
"max_cntlid": 65519, 00:25:21.799 "ana_reporting": false 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "nvmf_subsystem_add_host", 00:25:21.799 "params": { 00:25:21.799 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.799 "host": "nqn.2016-06.io.spdk:host1", 00:25:21.799 "psk": "key0" 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "nvmf_subsystem_add_ns", 00:25:21.799 "params": { 00:25:21.799 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.799 "namespace": { 00:25:21.799 "nsid": 1, 00:25:21.799 "bdev_name": "malloc0", 00:25:21.799 "nguid": "C9EAD98B295A4FF585757A05F24F52E9", 00:25:21.799 "uuid": "c9ead98b-295a-4ff5-8575-7a05f24f52e9", 00:25:21.799 "no_auto_visible": false 00:25:21.799 } 00:25:21.799 } 00:25:21.799 }, 00:25:21.799 { 00:25:21.799 "method": "nvmf_subsystem_add_listener", 00:25:21.799 "params": { 00:25:21.799 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.799 "listen_address": { 00:25:21.799 "trtype": "TCP", 00:25:21.799 "adrfam": "IPv4", 00:25:21.799 "traddr": "10.0.0.2", 00:25:21.799 "trsvcid": "4420" 00:25:21.799 }, 00:25:21.799 "secure_channel": true 00:25:21.799 } 00:25:21.799 } 00:25:21.800 ] 00:25:21.800 } 00:25:21.800 ] 00:25:21.800 }' 00:25:21.800 15:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:22.370 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:22.370 "subsystems": [ 00:25:22.370 { 00:25:22.370 "subsystem": "keyring", 00:25:22.370 "config": [ 00:25:22.370 { 00:25:22.370 "method": "keyring_file_add_key", 00:25:22.370 "params": { 00:25:22.370 "name": "key0", 00:25:22.370 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:22.370 } 00:25:22.370 } 00:25:22.370 ] 00:25:22.370 }, 00:25:22.370 { 00:25:22.370 "subsystem": "iobuf", 00:25:22.370 "config": [ 00:25:22.370 { 00:25:22.370 "method": "iobuf_set_options", 00:25:22.370 "params": { 00:25:22.370 "small_pool_count": 8192, 00:25:22.370 "large_pool_count": 1024, 00:25:22.370 "small_bufsize": 8192, 00:25:22.370 "large_bufsize": 135168, 00:25:22.370 "enable_numa": false 00:25:22.370 } 00:25:22.370 } 00:25:22.370 ] 00:25:22.370 }, 00:25:22.370 { 00:25:22.370 "subsystem": "sock", 00:25:22.370 "config": [ 00:25:22.370 { 00:25:22.370 "method": "sock_set_default_impl", 00:25:22.370 "params": { 00:25:22.370 "impl_name": "posix" 00:25:22.370 } 00:25:22.370 }, 00:25:22.370 { 00:25:22.370 "method": "sock_impl_set_options", 00:25:22.370 "params": { 00:25:22.370 "impl_name": "ssl", 00:25:22.370 "recv_buf_size": 4096, 00:25:22.370 "send_buf_size": 4096, 00:25:22.370 "enable_recv_pipe": true, 00:25:22.370 "enable_quickack": false, 00:25:22.371 "enable_placement_id": 0, 00:25:22.371 "enable_zerocopy_send_server": true, 00:25:22.371 "enable_zerocopy_send_client": false, 00:25:22.371 "zerocopy_threshold": 0, 00:25:22.371 "tls_version": 0, 00:25:22.371 "enable_ktls": false 00:25:22.371 } 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "method": "sock_impl_set_options", 00:25:22.371 "params": { 00:25:22.371 "impl_name": "posix", 00:25:22.371 "recv_buf_size": 2097152, 00:25:22.371 "send_buf_size": 2097152, 00:25:22.371 "enable_recv_pipe": true, 00:25:22.371 "enable_quickack": false, 00:25:22.371 "enable_placement_id": 0, 00:25:22.371 "enable_zerocopy_send_server": true, 00:25:22.371 "enable_zerocopy_send_client": false, 00:25:22.371 "zerocopy_threshold": 0, 00:25:22.371 "tls_version": 0, 00:25:22.371 "enable_ktls": false 00:25:22.371 } 00:25:22.371 
} 00:25:22.371 ] 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "subsystem": "vmd", 00:25:22.371 "config": [] 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "subsystem": "accel", 00:25:22.371 "config": [ 00:25:22.371 { 00:25:22.371 "method": "accel_set_options", 00:25:22.371 "params": { 00:25:22.371 "small_cache_size": 128, 00:25:22.371 "large_cache_size": 16, 00:25:22.371 "task_count": 2048, 00:25:22.371 "sequence_count": 2048, 00:25:22.371 "buf_count": 2048 00:25:22.371 } 00:25:22.371 } 00:25:22.371 ] 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "subsystem": "bdev", 00:25:22.371 "config": [ 00:25:22.371 { 00:25:22.371 "method": "bdev_set_options", 00:25:22.371 "params": { 00:25:22.371 "bdev_io_pool_size": 65535, 00:25:22.371 "bdev_io_cache_size": 256, 00:25:22.371 "bdev_auto_examine": true, 00:25:22.371 "iobuf_small_cache_size": 128, 00:25:22.371 "iobuf_large_cache_size": 16 00:25:22.371 } 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "method": "bdev_raid_set_options", 00:25:22.371 "params": { 00:25:22.371 "process_window_size_kb": 1024, 00:25:22.371 "process_max_bandwidth_mb_sec": 0 00:25:22.371 } 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "method": "bdev_iscsi_set_options", 00:25:22.371 "params": { 00:25:22.371 "timeout_sec": 30 00:25:22.371 } 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "method": "bdev_nvme_set_options", 00:25:22.371 "params": { 00:25:22.371 "action_on_timeout": "none", 00:25:22.371 "timeout_us": 0, 00:25:22.371 "timeout_admin_us": 0, 00:25:22.371 "keep_alive_timeout_ms": 10000, 00:25:22.371 "arbitration_burst": 0, 00:25:22.371 "low_priority_weight": 0, 00:25:22.371 "medium_priority_weight": 0, 00:25:22.371 "high_priority_weight": 0, 00:25:22.371 "nvme_adminq_poll_period_us": 10000, 00:25:22.371 "nvme_ioq_poll_period_us": 0, 00:25:22.371 "io_queue_requests": 512, 00:25:22.371 "delay_cmd_submit": true, 00:25:22.371 "transport_retry_count": 4, 00:25:22.371 "bdev_retry_count": 3, 00:25:22.371 "transport_ack_timeout": 0, 00:25:22.371 "ctrlr_loss_timeout_sec": 0, 00:25:22.371 "reconnect_delay_sec": 0, 00:25:22.371 "fast_io_fail_timeout_sec": 0, 00:25:22.371 "disable_auto_failback": false, 00:25:22.371 "generate_uuids": false, 00:25:22.371 "transport_tos": 0, 00:25:22.371 "nvme_error_stat": false, 00:25:22.371 "rdma_srq_size": 0, 00:25:22.371 "io_path_stat": false, 00:25:22.371 "allow_accel_sequence": false, 00:25:22.371 "rdma_max_cq_size": 0, 00:25:22.371 "rdma_cm_event_timeout_ms": 0, 00:25:22.371 "dhchap_digests": [ 00:25:22.371 "sha256", 00:25:22.371 "sha384", 00:25:22.371 "sha512" 00:25:22.371 ], 00:25:22.371 "dhchap_dhgroups": [ 00:25:22.371 "null", 00:25:22.371 "ffdhe2048", 00:25:22.371 "ffdhe3072", 00:25:22.371 "ffdhe4096", 00:25:22.371 "ffdhe6144", 00:25:22.371 "ffdhe8192" 00:25:22.371 ] 00:25:22.371 } 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "method": "bdev_nvme_attach_controller", 00:25:22.371 "params": { 00:25:22.371 "name": "TLSTEST", 00:25:22.371 "trtype": "TCP", 00:25:22.371 "adrfam": "IPv4", 00:25:22.371 "traddr": "10.0.0.2", 00:25:22.371 "trsvcid": "4420", 00:25:22.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.371 "prchk_reftag": false, 00:25:22.371 "prchk_guard": false, 00:25:22.371 "ctrlr_loss_timeout_sec": 0, 00:25:22.371 "reconnect_delay_sec": 0, 00:25:22.371 "fast_io_fail_timeout_sec": 0, 00:25:22.371 "psk": "key0", 00:25:22.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:22.371 "hdgst": false, 00:25:22.371 "ddgst": false, 00:25:22.371 "multipath": "multipath" 00:25:22.371 } 00:25:22.371 }, 00:25:22.371 { 00:25:22.371 "method": 
"bdev_nvme_set_hotplug", 00:25:22.371 "params": { 00:25:22.371 "period_us": 100000, 00:25:22.371 "enable": false 00:25:22.371 } 00:25:22.371 }, 00:25:22.372 { 00:25:22.372 "method": "bdev_wait_for_examine" 00:25:22.372 } 00:25:22.372 ] 00:25:22.372 }, 00:25:22.372 { 00:25:22.372 "subsystem": "nbd", 00:25:22.372 "config": [] 00:25:22.372 } 00:25:22.372 ] 00:25:22.372 }' 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3220926 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3220926 ']' 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3220926 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3220926 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3220926' 00:25:22.372 killing process with pid 3220926 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3220926 00:25:22.372 Received shutdown signal, test time was about 10.000000 seconds 00:25:22.372 00:25:22.372 Latency(us) 00:25:22.372 [2024-10-28T14:21:09.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.372 [2024-10-28T14:21:09.239Z] =================================================================================================================== 00:25:22.372 [2024-10-28T14:21:09.239Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:22.372 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3220926 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3220432 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3220432 ']' 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3220432 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3220432 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3220432' 00:25:22.938 killing process with pid 3220432 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3220432 00:25:22.938 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3220432 00:25:23.197 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:23.197 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:23.197 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.197 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:23.197 "subsystems": [ 00:25:23.197 { 00:25:23.197 "subsystem": "keyring", 00:25:23.197 "config": [ 00:25:23.197 { 00:25:23.197 "method": "keyring_file_add_key", 00:25:23.197 "params": { 00:25:23.197 "name": "key0", 00:25:23.197 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:23.197 } 00:25:23.197 } 00:25:23.197 ] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "iobuf", 00:25:23.197 "config": [ 00:25:23.197 { 00:25:23.197 "method": "iobuf_set_options", 00:25:23.197 "params": { 00:25:23.197 "small_pool_count": 8192, 00:25:23.197 "large_pool_count": 1024, 00:25:23.197 "small_bufsize": 8192, 00:25:23.197 "large_bufsize": 135168, 00:25:23.197 "enable_numa": false 00:25:23.197 } 00:25:23.197 } 00:25:23.197 ] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "sock", 00:25:23.197 "config": [ 00:25:23.197 { 00:25:23.197 "method": "sock_set_default_impl", 00:25:23.197 "params": { 00:25:23.197 "impl_name": "posix" 00:25:23.197 } 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "method": "sock_impl_set_options", 00:25:23.197 "params": { 00:25:23.197 "impl_name": "ssl", 00:25:23.197 "recv_buf_size": 4096, 00:25:23.197 "send_buf_size": 4096, 00:25:23.197 "enable_recv_pipe": true, 00:25:23.197 "enable_quickack": false, 00:25:23.197 "enable_placement_id": 0, 00:25:23.197 "enable_zerocopy_send_server": true, 00:25:23.197 "enable_zerocopy_send_client": false, 00:25:23.197 "zerocopy_threshold": 0, 00:25:23.197 "tls_version": 0, 00:25:23.197 "enable_ktls": false 00:25:23.197 } 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "method": "sock_impl_set_options", 00:25:23.197 "params": { 00:25:23.197 "impl_name": "posix", 00:25:23.197 "recv_buf_size": 2097152, 00:25:23.197 "send_buf_size": 2097152, 00:25:23.197 "enable_recv_pipe": true, 00:25:23.197 "enable_quickack": false, 00:25:23.197 "enable_placement_id": 0, 00:25:23.197 "enable_zerocopy_send_server": true, 00:25:23.197 "enable_zerocopy_send_client": false, 00:25:23.197 "zerocopy_threshold": 0, 00:25:23.197 "tls_version": 0, 00:25:23.197 "enable_ktls": false 00:25:23.197 } 00:25:23.197 } 00:25:23.197 ] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "vmd", 00:25:23.197 "config": [] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "accel", 00:25:23.197 "config": [ 00:25:23.197 { 00:25:23.197 "method": "accel_set_options", 00:25:23.197 "params": { 00:25:23.197 "small_cache_size": 128, 00:25:23.197 "large_cache_size": 16, 00:25:23.197 "task_count": 2048, 00:25:23.197 "sequence_count": 2048, 00:25:23.197 "buf_count": 2048 00:25:23.197 } 00:25:23.197 } 00:25:23.197 ] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "bdev", 00:25:23.197 "config": [ 00:25:23.197 { 00:25:23.197 "method": "bdev_set_options", 00:25:23.197 "params": { 00:25:23.197 "bdev_io_pool_size": 65535, 00:25:23.197 "bdev_io_cache_size": 256, 00:25:23.197 "bdev_auto_examine": true, 00:25:23.197 "iobuf_small_cache_size": 128, 00:25:23.197 "iobuf_large_cache_size": 16 00:25:23.197 } 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "method": "bdev_raid_set_options", 00:25:23.197 "params": { 00:25:23.197 "process_window_size_kb": 1024, 00:25:23.197 "process_max_bandwidth_mb_sec": 0 00:25:23.197 } 00:25:23.197 }, 
00:25:23.197 { 00:25:23.197 "method": "bdev_iscsi_set_options", 00:25:23.197 "params": { 00:25:23.197 "timeout_sec": 30 00:25:23.197 } 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "method": "bdev_nvme_set_options", 00:25:23.197 "params": { 00:25:23.197 "action_on_timeout": "none", 00:25:23.197 "timeout_us": 0, 00:25:23.197 "timeout_admin_us": 0, 00:25:23.197 "keep_alive_timeout_ms": 10000, 00:25:23.197 "arbitration_burst": 0, 00:25:23.197 "low_priority_weight": 0, 00:25:23.197 "medium_priority_weight": 0, 00:25:23.197 "high_priority_weight": 0, 00:25:23.197 "nvme_adminq_poll_period_us": 10000, 00:25:23.197 "nvme_ioq_poll_period_us": 0, 00:25:23.197 "io_queue_requests": 0, 00:25:23.197 "delay_cmd_submit": true, 00:25:23.197 "transport_retry_count": 4, 00:25:23.197 "bdev_retry_count": 3, 00:25:23.197 "transport_ack_timeout": 0, 00:25:23.197 "ctrlr_loss_timeout_sec": 0, 00:25:23.197 "reconnect_delay_sec": 0, 00:25:23.197 "fast_io_fail_timeout_sec": 0, 00:25:23.197 "disable_auto_failback": false, 00:25:23.197 "generate_uuids": false, 00:25:23.197 "transport_tos": 0, 00:25:23.197 "nvme_error_stat": false, 00:25:23.197 "rdma_srq_size": 0, 00:25:23.197 "io_path_stat": false, 00:25:23.197 "allow_accel_sequence": false, 00:25:23.197 "rdma_max_cq_size": 0, 00:25:23.197 "rdma_cm_event_timeout_ms": 0, 00:25:23.197 "dhchap_digests": [ 00:25:23.197 "sha256", 00:25:23.197 "sha384", 00:25:23.197 "sha512" 00:25:23.197 ], 00:25:23.197 "dhchap_dhgroups": [ 00:25:23.197 "null", 00:25:23.197 "ffdhe2048", 00:25:23.197 "ffdhe3072", 00:25:23.197 "ffdhe4096", 00:25:23.197 "ffdhe6144", 00:25:23.197 "ffdhe8192" 00:25:23.197 ] 00:25:23.197 } 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "method": "bdev_nvme_set_hotplug", 00:25:23.197 "params": { 00:25:23.197 "period_us": 100000, 00:25:23.197 "enable": false 00:25:23.197 } 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "method": "bdev_malloc_create", 00:25:23.197 "params": { 00:25:23.197 "name": "malloc0", 00:25:23.197 "num_blocks": 8192, 00:25:23.197 "block_size": 4096, 00:25:23.197 "physical_block_size": 4096, 00:25:23.197 "uuid": "c9ead98b-295a-4ff5-8575-7a05f24f52e9", 00:25:23.197 "optimal_io_boundary": 0, 00:25:23.197 "md_size": 0, 00:25:23.197 "dif_type": 0, 00:25:23.197 "dif_is_head_of_md": false, 00:25:23.197 "dif_pi_format": 0 00:25:23.197 } 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "method": "bdev_wait_for_examine" 00:25:23.197 } 00:25:23.197 ] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "nbd", 00:25:23.197 "config": [] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "scheduler", 00:25:23.197 "config": [ 00:25:23.197 { 00:25:23.197 "method": "framework_set_scheduler", 00:25:23.197 "params": { 00:25:23.197 "name": "static" 00:25:23.197 } 00:25:23.197 } 00:25:23.197 ] 00:25:23.197 }, 00:25:23.197 { 00:25:23.197 "subsystem": "nvmf", 00:25:23.197 "config": [ 00:25:23.197 { 00:25:23.197 "method": "nvmf_set_config", 00:25:23.197 "params": { 00:25:23.197 "discovery_filter": "match_any", 00:25:23.197 "admin_cmd_passthru": { 00:25:23.198 "identify_ctrlr": false 00:25:23.198 }, 00:25:23.198 "dhchap_digests": [ 00:25:23.198 "sha256", 00:25:23.198 "sha384", 00:25:23.198 "sha512" 00:25:23.198 ], 00:25:23.198 "dhchap_dhgroups": [ 00:25:23.198 "null", 00:25:23.198 "ffdhe2048", 00:25:23.198 "ffdhe3072", 00:25:23.198 "ffdhe4096", 00:25:23.198 "ffdhe6144", 00:25:23.198 "ffdhe8192" 00:25:23.198 ] 00:25:23.198 } 00:25:23.198 }, 00:25:23.198 { 00:25:23.198 "method": "nvmf_set_max_subsystems", 00:25:23.198 "params": { 00:25:23.198 "max_subsystems": 1024 
00:25:23.198 } 00:25:23.198 }, 00:25:23.198 { 00:25:23.198 "method": "nvmf_set_crdt", 00:25:23.198 "params": { 00:25:23.198 "crdt1": 0, 00:25:23.198 "crdt2": 0, 00:25:23.198 "crdt3": 0 00:25:23.198 } 00:25:23.198 }, 00:25:23.198 { 00:25:23.198 "method": "nvmf_create_transport", 00:25:23.198 "params": { 00:25:23.198 "trtype": "TCP", 00:25:23.198 "max_queue_depth": 128, 00:25:23.198 "max_io_qpairs_per_ctrlr": 127, 00:25:23.198 "in_capsule_data_size": 4096, 00:25:23.198 "max_io_size": 131072, 00:25:23.198 "io_unit_size": 131072, 00:25:23.198 "max_aq_depth": 128, 00:25:23.198 "num_shared_buffers": 511, 00:25:23.198 "buf_cache_size": 4294967295, 00:25:23.198 "dif_insert_or_strip": false, 00:25:23.198 "zcopy": false, 00:25:23.198 "c2h_success": false, 00:25:23.198 "sock_priority": 0, 00:25:23.198 "abort_timeout_sec": 1, 00:25:23.198 "ack_timeout": 0, 00:25:23.198 "data_wr_pool_size": 0 00:25:23.198 } 00:25:23.198 }, 00:25:23.198 { 00:25:23.198 "method": "nvmf_create_subsystem", 00:25:23.198 "params": { 00:25:23.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.198 "allow_any_host": false, 00:25:23.198 "serial_number": "SPDK00000000000001", 00:25:23.198 "model_number": "SPDK bdev Controller", 00:25:23.198 "max_namespaces": 10, 00:25:23.198 "min_cntlid": 1, 00:25:23.198 "max_cntlid": 65519, 00:25:23.198 "ana_reporting": false 00:25:23.198 } 00:25:23.198 }, 00:25:23.198 { 00:25:23.198 "method": "nvmf_subsystem_add_host", 00:25:23.198 "params": { 00:25:23.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.198 "host": "nqn.2016-06.io.spdk:host1", 00:25:23.198 "psk": "key0" 00:25:23.198 } 00:25:23.198 }, 00:25:23.198 { 00:25:23.198 "method": "nvmf_subsystem_add_ns", 00:25:23.198 "params": { 00:25:23.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.198 "namespace": { 00:25:23.198 "nsid": 1, 00:25:23.198 "bdev_name": "malloc0", 00:25:23.198 "nguid": "C9EAD98B295A4FF585757A05F24F52E9", 00:25:23.198 "uuid": "c9ead98b-295a-4ff5-8575-7a05f24f52e9", 00:25:23.198 "no_auto_visible": false 00:25:23.198 } 00:25:23.198 } 00:25:23.198 }, 00:25:23.198 { 00:25:23.198 "method": "nvmf_subsystem_add_listener", 00:25:23.198 "params": { 00:25:23.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.198 "listen_address": { 00:25:23.198 "trtype": "TCP", 00:25:23.198 "adrfam": "IPv4", 00:25:23.198 "traddr": "10.0.0.2", 00:25:23.198 "trsvcid": "4420" 00:25:23.198 }, 00:25:23.198 "secure_channel": true 00:25:23.198 } 00:25:23.198 } 00:25:23.198 ] 00:25:23.198 } 00:25:23.198 ] 00:25:23.198 }' 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3221746 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3221746 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3221746 ']' 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:25:23.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.198 15:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.198 [2024-10-28 15:21:10.050930] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:23.198 [2024-10-28 15:21:10.051075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.458 [2024-10-28 15:21:10.184784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.458 [2024-10-28 15:21:10.294104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.458 [2024-10-28 15:21:10.294213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.458 [2024-10-28 15:21:10.294248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.458 [2024-10-28 15:21:10.294279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.458 [2024-10-28 15:21:10.294305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.458 [2024-10-28 15:21:10.295757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.029 [2024-10-28 15:21:10.628406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.029 [2024-10-28 15:21:10.660865] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.029 [2024-10-28 15:21:10.661277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3221999 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3221999 /var/tmp/bdevperf.sock 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3221999 ']' 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:24.596 15:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:24.596 "subsystems": [ 00:25:24.596 { 00:25:24.596 "subsystem": "keyring", 00:25:24.596 "config": [ 00:25:24.596 { 00:25:24.596 "method": "keyring_file_add_key", 00:25:24.596 "params": { 00:25:24.596 "name": "key0", 00:25:24.596 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:24.596 } 00:25:24.596 } 00:25:24.596 ] 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "subsystem": "iobuf", 00:25:24.596 "config": [ 00:25:24.596 { 00:25:24.596 "method": "iobuf_set_options", 00:25:24.596 "params": { 00:25:24.596 "small_pool_count": 8192, 00:25:24.596 "large_pool_count": 1024, 00:25:24.596 "small_bufsize": 8192, 00:25:24.596 "large_bufsize": 135168, 00:25:24.596 "enable_numa": false 00:25:24.596 } 00:25:24.596 } 00:25:24.596 ] 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "subsystem": "sock", 00:25:24.596 "config": [ 00:25:24.596 { 00:25:24.596 "method": "sock_set_default_impl", 00:25:24.596 "params": { 00:25:24.596 "impl_name": "posix" 00:25:24.596 } 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "method": "sock_impl_set_options", 00:25:24.596 "params": { 00:25:24.596 "impl_name": "ssl", 00:25:24.596 "recv_buf_size": 4096, 00:25:24.596 "send_buf_size": 4096, 00:25:24.596 "enable_recv_pipe": true, 00:25:24.596 "enable_quickack": false, 00:25:24.596 "enable_placement_id": 0, 00:25:24.596 "enable_zerocopy_send_server": true, 00:25:24.596 "enable_zerocopy_send_client": false, 00:25:24.596 "zerocopy_threshold": 0, 00:25:24.596 "tls_version": 0, 00:25:24.596 "enable_ktls": false 00:25:24.596 } 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "method": "sock_impl_set_options", 00:25:24.596 "params": { 00:25:24.596 "impl_name": "posix", 00:25:24.596 "recv_buf_size": 2097152, 00:25:24.596 "send_buf_size": 2097152, 00:25:24.596 "enable_recv_pipe": true, 00:25:24.596 "enable_quickack": false, 00:25:24.596 "enable_placement_id": 0, 00:25:24.596 "enable_zerocopy_send_server": true, 00:25:24.596 "enable_zerocopy_send_client": false, 00:25:24.596 "zerocopy_threshold": 0, 00:25:24.596 "tls_version": 0, 00:25:24.596 "enable_ktls": false 00:25:24.596 } 00:25:24.596 } 00:25:24.596 ] 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "subsystem": "vmd", 00:25:24.596 "config": [] 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "subsystem": "accel", 00:25:24.596 "config": [ 00:25:24.596 { 00:25:24.596 "method": "accel_set_options", 00:25:24.596 "params": { 00:25:24.596 "small_cache_size": 128, 00:25:24.596 "large_cache_size": 16, 00:25:24.596 "task_count": 2048, 00:25:24.596 "sequence_count": 2048, 00:25:24.596 "buf_count": 2048 00:25:24.596 } 00:25:24.596 } 00:25:24.596 ] 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "subsystem": "bdev", 00:25:24.596 "config": [ 00:25:24.596 { 00:25:24.596 "method": "bdev_set_options", 00:25:24.596 "params": { 00:25:24.596 "bdev_io_pool_size": 65535, 00:25:24.596 "bdev_io_cache_size": 256, 00:25:24.596 "bdev_auto_examine": true, 00:25:24.596 "iobuf_small_cache_size": 128, 00:25:24.596 "iobuf_large_cache_size": 16 00:25:24.596 } 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "method": "bdev_raid_set_options", 00:25:24.596 "params": { 00:25:24.596 "process_window_size_kb": 1024, 00:25:24.596 "process_max_bandwidth_mb_sec": 0 00:25:24.596 } 00:25:24.596 }, 
00:25:24.596 { 00:25:24.596 "method": "bdev_iscsi_set_options", 00:25:24.596 "params": { 00:25:24.596 "timeout_sec": 30 00:25:24.596 } 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "method": "bdev_nvme_set_options", 00:25:24.596 "params": { 00:25:24.596 "action_on_timeout": "none", 00:25:24.596 "timeout_us": 0, 00:25:24.596 "timeout_admin_us": 0, 00:25:24.596 "keep_alive_timeout_ms": 10000, 00:25:24.596 "arbitration_burst": 0, 00:25:24.596 "low_priority_weight": 0, 00:25:24.596 "medium_priority_weight": 0, 00:25:24.596 "high_priority_weight": 0, 00:25:24.596 "nvme_adminq_poll_period_us": 10000, 00:25:24.596 "nvme_ioq_poll_period_us": 0, 00:25:24.596 "io_queue_requests": 512, 00:25:24.596 "delay_cmd_submit": true, 00:25:24.596 "transport_retry_count": 4, 00:25:24.596 "bdev_retry_count": 3, 00:25:24.596 "transport_ack_timeout": 0, 00:25:24.596 "ctrlr_loss_timeout_sec": 0, 00:25:24.596 "reconnect_delay_sec": 0, 00:25:24.596 "fast_io_fail_timeout_sec": 0, 00:25:24.596 "disable_auto_failback": false, 00:25:24.596 "generate_uuids": false, 00:25:24.596 "transport_tos": 0, 00:25:24.596 "nvme_error_stat": false, 00:25:24.596 "rdma_srq_size": 0, 00:25:24.596 "io_path_stat": false, 00:25:24.596 "allow_accel_sequence": false, 00:25:24.596 "rdma_max_cq_size": 0, 00:25:24.596 "rdma_cm_event_timeout_ms": 0, 00:25:24.596 "dhchap_digests": [ 00:25:24.596 "sha256", 00:25:24.596 "sha384", 00:25:24.596 "sha512" 00:25:24.596 ], 00:25:24.596 "dhchap_dhgroups": [ 00:25:24.596 "null", 00:25:24.596 "ffdhe2048", 00:25:24.596 "ffdhe3072", 00:25:24.596 "ffdhe4096", 00:25:24.596 "ffdhe6144", 00:25:24.596 "ffdhe8192" 00:25:24.596 ] 00:25:24.596 } 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "method": "bdev_nvme_attach_controller", 00:25:24.596 "params": { 00:25:24.596 "name": "TLSTEST", 00:25:24.596 "trtype": "TCP", 00:25:24.596 "adrfam": "IPv4", 00:25:24.596 "traddr": "10.0.0.2", 00:25:24.596 "trsvcid": "4420", 00:25:24.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.596 "prchk_reftag": false, 00:25:24.596 "prchk_guard": false, 00:25:24.596 "ctrlr_loss_timeout_sec": 0, 00:25:24.596 "reconnect_delay_sec": 0, 00:25:24.596 "fast_io_fail_timeout_sec": 0, 00:25:24.596 "psk": "key0", 00:25:24.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.596 "hdgst": false, 00:25:24.596 "ddgst": false, 00:25:24.596 "multipath": "multipath" 00:25:24.596 } 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "method": "bdev_nvme_set_hotplug", 00:25:24.596 "params": { 00:25:24.596 "period_us": 100000, 00:25:24.596 "enable": false 00:25:24.596 } 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "method": "bdev_wait_for_examine" 00:25:24.596 } 00:25:24.596 ] 00:25:24.596 }, 00:25:24.596 { 00:25:24.596 "subsystem": "nbd", 00:25:24.596 "config": [] 00:25:24.596 } 00:25:24.596 ] 00:25:24.596 }' 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.596 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.596 [2024-10-28 15:21:11.264294] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
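The JSON blob echoed above is the configuration that target/tls.sh@206 generates and hands to bdevperf as -c /dev/fd/63. Most of it restates defaults; the TLS-specific parts are the keyring_file_add_key entry that points at the PSK file /tmp/tmp.5mD0rT7vYi and the bdev_nvme_attach_controller entry that references that key as "psk": "key0". A minimal hand-written equivalent is sketched below; this is an illustration rather than the generated file, the /tmp/bdevperf_tls.json path is hypothetical, and subsystems left out of the JSON simply keep their defaults.

    # Hypothetical path; the test itself streams the JSON over /dev/fd/63.
    cat > /tmp/bdevperf_tls.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "keyring",
          "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.5mD0rT7vYi" } }
          ]
        },
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "TLSTEST", "trtype": "TCP", "adrfam": "IPv4",
                          "traddr": "10.0.0.2", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # Launch bdevperf against it, mirroring the target/tls.sh@206 invocation.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c /tmp/bdevperf_tls.json

With -z the process stays idle until a perform_tests RPC arrives, which is what the bdevperf.py call at target/tls.sh@213 further down issues.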
00:25:24.596 [2024-10-28 15:21:11.264389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221999 ] 00:25:24.596 [2024-10-28 15:21:11.343241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.596 [2024-10-28 15:21:11.407455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.854 [2024-10-28 15:21:11.591431] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:24.854 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.854 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:24.854 15:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:25.112 Running I/O for 10 seconds... 00:25:27.424 1537.00 IOPS, 6.00 MiB/s [2024-10-28T14:21:15.230Z] 2252.50 IOPS, 8.80 MiB/s [2024-10-28T14:21:16.171Z] 2009.67 IOPS, 7.85 MiB/s [2024-10-28T14:21:17.110Z] 1900.25 IOPS, 7.42 MiB/s [2024-10-28T14:21:18.048Z] 1865.00 IOPS, 7.29 MiB/s [2024-10-28T14:21:18.985Z] 2016.67 IOPS, 7.88 MiB/s [2024-10-28T14:21:20.364Z] 2010.86 IOPS, 7.85 MiB/s [2024-10-28T14:21:21.305Z] 2089.38 IOPS, 8.16 MiB/s [2024-10-28T14:21:22.243Z] 2025.33 IOPS, 7.91 MiB/s [2024-10-28T14:21:22.243Z] 1977.30 IOPS, 7.72 MiB/s 00:25:35.376 Latency(us) 00:25:35.376 [2024-10-28T14:21:22.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.376 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:35.376 Verification LBA range: start 0x0 length 0x2000 00:25:35.376 TLSTESTn1 : 10.05 1981.06 7.74 0.00 0.00 64437.71 13204.29 60584.39 00:25:35.376 [2024-10-28T14:21:22.243Z] =================================================================================================================== 00:25:35.376 [2024-10-28T14:21:22.243Z] Total : 1981.06 7.74 0.00 0.00 64437.71 13204.29 60584.39 00:25:35.376 { 00:25:35.376 "results": [ 00:25:35.377 { 00:25:35.377 "job": "TLSTESTn1", 00:25:35.377 "core_mask": "0x4", 00:25:35.377 "workload": "verify", 00:25:35.377 "status": "finished", 00:25:35.377 "verify_range": { 00:25:35.377 "start": 0, 00:25:35.377 "length": 8192 00:25:35.377 }, 00:25:35.377 "queue_depth": 128, 00:25:35.377 "io_size": 4096, 00:25:35.377 "runtime": 10.045643, 00:25:35.377 "iops": 1981.057857620463, 00:25:35.377 "mibps": 7.7385072563299335, 00:25:35.377 "io_failed": 0, 00:25:35.377 "io_timeout": 0, 00:25:35.377 "avg_latency_us": 64437.71183744722, 00:25:35.377 "min_latency_us": 13204.29037037037, 00:25:35.377 "max_latency_us": 60584.39111111111 00:25:35.377 } 00:25:35.377 ], 00:25:35.377 "core_count": 1 00:25:35.377 } 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3221999 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3221999 ']' 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3221999 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3221999 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3221999' 00:25:35.377 killing process with pid 3221999 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3221999 00:25:35.377 Received shutdown signal, test time was about 10.000000 seconds 00:25:35.377 00:25:35.377 Latency(us) 00:25:35.377 [2024-10-28T14:21:22.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.377 [2024-10-28T14:21:22.244Z] =================================================================================================================== 00:25:35.377 [2024-10-28T14:21:22.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.377 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3221999 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3221746 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3221746 ']' 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3221746 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3221746 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3221746' 00:25:35.636 killing process with pid 3221746 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3221746 00:25:35.636 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3221746 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3223296 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3223296 
00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3223296 ']' 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.895 15:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.153 [2024-10-28 15:21:22.814813] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:36.153 [2024-10-28 15:21:22.814900] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.153 [2024-10-28 15:21:22.902245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.153 [2024-10-28 15:21:22.965843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.153 [2024-10-28 15:21:22.965917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.153 [2024-10-28 15:21:22.965931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.153 [2024-10-28 15:21:22.965942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.153 [2024-10-28 15:21:22.965952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
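The setup_nvmf_tgt trace that follows (target/tls.sh@221, expanded at tls.sh@52-59) configures this freshly started target for TLS: create the TCP transport, create a subsystem, add a TLS-enabled listener, back the subsystem with a malloc bdev namespace, register the PSK file in the keyring, and authorize the host against that key. Condensed from the rpc.py invocations traced below, with the long workspace path shortened to $RPC:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.5mD0rT7vYi
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o                                  # TCP transport (flags as used by the test)
    $RPC nvmf_create_subsystem $NQN -s SPDK00000000000001 -m 10           # subsystem, up to 10 namespaces
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (hence the experimental-TLS notice)
    $RPC bdev_malloc_create 32 4096 -b malloc0                            # malloc backing bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns $NQN malloc0 -n 1                          # expose it as namespace 1
    $RPC keyring_file_add_key key0 "$KEY"                                 # register the PSK file as key0
    $RPC nvmf_subsystem_add_host $NQN nqn.2016-06.io.spdk:host1 --psk key0   # allow host1, bound to key0

rpc.py talks to the target's default /var/tmp/spdk.sock here; the bdevperf side receives the matching key and controller over /var/tmp/bdevperf.sock further down.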
00:25:36.153 [2024-10-28 15:21:22.966565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.5mD0rT7vYi 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5mD0rT7vYi 00:25:36.722 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:36.983 [2024-10-28 15:21:23.672408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.983 15:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:37.244 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:37.814 [2024-10-28 15:21:24.552089] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:37.814 [2024-10-28 15:21:24.552561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.814 15:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:38.385 malloc0 00:25:38.385 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:38.645 15:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:25:39.586 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3223785 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3223785 /var/tmp/bdevperf.sock 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3223785 ']' 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.849 15:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:39.849 [2024-10-28 15:21:26.547402] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:39.849 [2024-10-28 15:21:26.547515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223785 ] 00:25:39.849 [2024-10-28 15:21:26.685988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.111 [2024-10-28 15:21:26.808244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.372 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.372 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:40.372 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:25:40.632 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:41.208 [2024-10-28 15:21:27.844158] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:41.208 nvme0n1 00:25:41.208 15:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:41.470 Running I/O for 1 seconds... 
00:25:42.412 1460.00 IOPS, 5.70 MiB/s 00:25:42.412 Latency(us) 00:25:42.412 [2024-10-28T14:21:29.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.412 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:42.412 Verification LBA range: start 0x0 length 0x2000 00:25:42.412 nvme0n1 : 1.04 1521.00 5.94 0.00 0.00 82446.97 12815.93 60196.03 00:25:42.412 [2024-10-28T14:21:29.279Z] =================================================================================================================== 00:25:42.412 [2024-10-28T14:21:29.279Z] Total : 1521.00 5.94 0.00 0.00 82446.97 12815.93 60196.03 00:25:42.413 { 00:25:42.413 "results": [ 00:25:42.413 { 00:25:42.413 "job": "nvme0n1", 00:25:42.413 "core_mask": "0x2", 00:25:42.413 "workload": "verify", 00:25:42.413 "status": "finished", 00:25:42.413 "verify_range": { 00:25:42.413 "start": 0, 00:25:42.413 "length": 8192 00:25:42.413 }, 00:25:42.413 "queue_depth": 128, 00:25:42.413 "io_size": 4096, 00:25:42.413 "runtime": 1.04405, 00:25:42.413 "iops": 1520.9999521095733, 00:25:42.413 "mibps": 5.941406062928021, 00:25:42.413 "io_failed": 0, 00:25:42.413 "io_timeout": 0, 00:25:42.413 "avg_latency_us": 82446.9659333893, 00:25:42.413 "min_latency_us": 12815.92888888889, 00:25:42.413 "max_latency_us": 60196.02962962963 00:25:42.413 } 00:25:42.413 ], 00:25:42.413 "core_count": 1 00:25:42.413 } 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3223785 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3223785 ']' 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3223785 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3223785 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3223785' 00:25:42.673 killing process with pid 3223785 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3223785 00:25:42.673 Received shutdown signal, test time was about 1.000000 seconds 00:25:42.673 00:25:42.673 Latency(us) 00:25:42.673 [2024-10-28T14:21:29.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.673 [2024-10-28T14:21:29.540Z] =================================================================================================================== 00:25:42.673 [2024-10-28T14:21:29.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.673 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3223785 00:25:42.931 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3223296 00:25:42.931 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3223296 ']' 00:25:42.931 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3223296 00:25:42.931 15:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:42.931 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.931 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3223296 00:25:43.191 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:43.191 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:43.191 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3223296' 00:25:43.191 killing process with pid 3223296 00:25:43.191 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3223296 00:25:43.191 15:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3223296 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3224191 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3224191 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3224191 ']' 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:43.452 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:43.452 [2024-10-28 15:21:30.235467] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:43.452 [2024-10-28 15:21:30.235675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.714 [2024-10-28 15:21:30.408200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.714 [2024-10-28 15:21:30.523627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.714 [2024-10-28 15:21:30.523769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:43.714 [2024-10-28 15:21:30.523805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.714 [2024-10-28 15:21:30.523835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.714 [2024-10-28 15:21:30.523861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.714 [2024-10-28 15:21:30.525236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.286 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.287 [2024-10-28 15:21:30.906097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.287 malloc0 00:25:44.287 [2024-10-28 15:21:30.946135] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:44.287 [2024-10-28 15:21:30.946618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3224242 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3224242 /var/tmp/bdevperf.sock 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3224242 ']' 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:44.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:44.287 15:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.287 [2024-10-28 15:21:31.079821] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
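Each bdevperf pass in this file is driven the same way, visible again in the trace around this point: bdevperf is started idle with -z so it only listens on its RPC socket, the key and controller are pushed over that socket, and bdevperf.py perform_tests then triggers the timed run. A condensed sketch using the binaries and flags from this run (the 1-second verify workload of the case above):

    BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle: -z defers I/O until perform_tests, -r sets the RPC socket.
    $BPERF -m 2 -z -r $SOCK -q 128 -o 4k -w verify -t 1 &
    # (the test script waits for $SOCK to appear before issuing RPCs)

    # TLS initiator setup over the RPC socket.
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi
    $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # Kick off the timed run; the IOPS/latency tables in this log are its output.
    $BPERF_PY -s $SOCK perform_tests

The -t value sets the nominal run time, while the runtime(s) column in the result tables appears to be the measured wall clock for the verify pass (10.05 s and roughly 1.04-1.05 s in the runs above and below).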
00:25:44.287 [2024-10-28 15:21:31.080004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3224242 ] 00:25:44.566 [2024-10-28 15:21:31.249527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.566 [2024-10-28 15:21:31.368619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.568 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:45.568 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:45.568 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi 00:25:46.138 15:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:46.707 [2024-10-28 15:21:33.372892] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:46.707 nvme0n1 00:25:46.707 15:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:46.968 Running I/O for 1 seconds... 00:25:47.905 1447.00 IOPS, 5.65 MiB/s 00:25:47.905 Latency(us) 00:25:47.905 [2024-10-28T14:21:34.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.905 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:47.905 Verification LBA range: start 0x0 length 0x2000 00:25:47.905 nvme0n1 : 1.05 1508.02 5.89 0.00 0.00 83387.12 9466.31 63302.92 00:25:47.905 [2024-10-28T14:21:34.772Z] =================================================================================================================== 00:25:47.905 [2024-10-28T14:21:34.772Z] Total : 1508.02 5.89 0.00 0.00 83387.12 9466.31 63302.92 00:25:47.905 { 00:25:47.905 "results": [ 00:25:47.905 { 00:25:47.906 "job": "nvme0n1", 00:25:47.906 "core_mask": "0x2", 00:25:47.906 "workload": "verify", 00:25:47.906 "status": "finished", 00:25:47.906 "verify_range": { 00:25:47.906 "start": 0, 00:25:47.906 "length": 8192 00:25:47.906 }, 00:25:47.906 "queue_depth": 128, 00:25:47.906 "io_size": 4096, 00:25:47.906 "runtime": 1.045081, 00:25:47.906 "iops": 1508.0170819295347, 00:25:47.906 "mibps": 5.890691726287245, 00:25:47.906 "io_failed": 0, 00:25:47.906 "io_timeout": 0, 00:25:47.906 "avg_latency_us": 83387.12112803159, 00:25:47.906 "min_latency_us": 9466.31111111111, 00:25:47.906 "max_latency_us": 63302.921481481484 00:25:47.906 } 00:25:47.906 ], 00:25:47.906 "core_count": 1 00:25:47.906 } 00:25:47.906 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:47.906 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.906 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.165 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.165 15:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:48.165 "subsystems": [ 00:25:48.165 { 00:25:48.165 "subsystem": "keyring", 00:25:48.165 "config": [ 00:25:48.165 { 00:25:48.165 "method": "keyring_file_add_key", 00:25:48.165 "params": { 00:25:48.165 "name": "key0", 00:25:48.165 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:48.165 } 00:25:48.165 } 00:25:48.165 ] 00:25:48.165 }, 00:25:48.165 { 00:25:48.165 "subsystem": "iobuf", 00:25:48.165 "config": [ 00:25:48.165 { 00:25:48.165 "method": "iobuf_set_options", 00:25:48.165 "params": { 00:25:48.165 "small_pool_count": 8192, 00:25:48.165 "large_pool_count": 1024, 00:25:48.165 "small_bufsize": 8192, 00:25:48.165 "large_bufsize": 135168, 00:25:48.165 "enable_numa": false 00:25:48.165 } 00:25:48.165 } 00:25:48.165 ] 00:25:48.165 }, 00:25:48.165 { 00:25:48.165 "subsystem": "sock", 00:25:48.165 "config": [ 00:25:48.166 { 00:25:48.166 "method": "sock_set_default_impl", 00:25:48.166 "params": { 00:25:48.166 "impl_name": "posix" 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "sock_impl_set_options", 00:25:48.166 "params": { 00:25:48.166 "impl_name": "ssl", 00:25:48.166 "recv_buf_size": 4096, 00:25:48.166 "send_buf_size": 4096, 00:25:48.166 "enable_recv_pipe": true, 00:25:48.166 "enable_quickack": false, 00:25:48.166 "enable_placement_id": 0, 00:25:48.166 "enable_zerocopy_send_server": true, 00:25:48.166 "enable_zerocopy_send_client": false, 00:25:48.166 "zerocopy_threshold": 0, 00:25:48.166 "tls_version": 0, 00:25:48.166 "enable_ktls": false 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "sock_impl_set_options", 00:25:48.166 "params": { 00:25:48.166 "impl_name": "posix", 00:25:48.166 "recv_buf_size": 2097152, 00:25:48.166 "send_buf_size": 2097152, 00:25:48.166 "enable_recv_pipe": true, 00:25:48.166 "enable_quickack": false, 00:25:48.166 "enable_placement_id": 0, 00:25:48.166 "enable_zerocopy_send_server": true, 00:25:48.166 "enable_zerocopy_send_client": false, 00:25:48.166 "zerocopy_threshold": 0, 00:25:48.166 "tls_version": 0, 00:25:48.166 "enable_ktls": false 00:25:48.166 } 00:25:48.166 } 00:25:48.166 ] 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "subsystem": "vmd", 00:25:48.166 "config": [] 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "subsystem": "accel", 00:25:48.166 "config": [ 00:25:48.166 { 00:25:48.166 "method": "accel_set_options", 00:25:48.166 "params": { 00:25:48.166 "small_cache_size": 128, 00:25:48.166 "large_cache_size": 16, 00:25:48.166 "task_count": 2048, 00:25:48.166 "sequence_count": 2048, 00:25:48.166 "buf_count": 2048 00:25:48.166 } 00:25:48.166 } 00:25:48.166 ] 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "subsystem": "bdev", 00:25:48.166 "config": [ 00:25:48.166 { 00:25:48.166 "method": "bdev_set_options", 00:25:48.166 "params": { 00:25:48.166 "bdev_io_pool_size": 65535, 00:25:48.166 "bdev_io_cache_size": 256, 00:25:48.166 "bdev_auto_examine": true, 00:25:48.166 "iobuf_small_cache_size": 128, 00:25:48.166 "iobuf_large_cache_size": 16 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "bdev_raid_set_options", 00:25:48.166 "params": { 00:25:48.166 "process_window_size_kb": 1024, 00:25:48.166 "process_max_bandwidth_mb_sec": 0 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "bdev_iscsi_set_options", 00:25:48.166 "params": { 00:25:48.166 "timeout_sec": 30 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "bdev_nvme_set_options", 00:25:48.166 "params": { 00:25:48.166 "action_on_timeout": "none", 00:25:48.166 
"timeout_us": 0, 00:25:48.166 "timeout_admin_us": 0, 00:25:48.166 "keep_alive_timeout_ms": 10000, 00:25:48.166 "arbitration_burst": 0, 00:25:48.166 "low_priority_weight": 0, 00:25:48.166 "medium_priority_weight": 0, 00:25:48.166 "high_priority_weight": 0, 00:25:48.166 "nvme_adminq_poll_period_us": 10000, 00:25:48.166 "nvme_ioq_poll_period_us": 0, 00:25:48.166 "io_queue_requests": 0, 00:25:48.166 "delay_cmd_submit": true, 00:25:48.166 "transport_retry_count": 4, 00:25:48.166 "bdev_retry_count": 3, 00:25:48.166 "transport_ack_timeout": 0, 00:25:48.166 "ctrlr_loss_timeout_sec": 0, 00:25:48.166 "reconnect_delay_sec": 0, 00:25:48.166 "fast_io_fail_timeout_sec": 0, 00:25:48.166 "disable_auto_failback": false, 00:25:48.166 "generate_uuids": false, 00:25:48.166 "transport_tos": 0, 00:25:48.166 "nvme_error_stat": false, 00:25:48.166 "rdma_srq_size": 0, 00:25:48.166 "io_path_stat": false, 00:25:48.166 "allow_accel_sequence": false, 00:25:48.166 "rdma_max_cq_size": 0, 00:25:48.166 "rdma_cm_event_timeout_ms": 0, 00:25:48.166 "dhchap_digests": [ 00:25:48.166 "sha256", 00:25:48.166 "sha384", 00:25:48.166 "sha512" 00:25:48.166 ], 00:25:48.166 "dhchap_dhgroups": [ 00:25:48.166 "null", 00:25:48.166 "ffdhe2048", 00:25:48.166 "ffdhe3072", 00:25:48.166 "ffdhe4096", 00:25:48.166 "ffdhe6144", 00:25:48.166 "ffdhe8192" 00:25:48.166 ] 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "bdev_nvme_set_hotplug", 00:25:48.166 "params": { 00:25:48.166 "period_us": 100000, 00:25:48.166 "enable": false 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "bdev_malloc_create", 00:25:48.166 "params": { 00:25:48.166 "name": "malloc0", 00:25:48.166 "num_blocks": 8192, 00:25:48.166 "block_size": 4096, 00:25:48.166 "physical_block_size": 4096, 00:25:48.166 "uuid": "c8861cd4-7404-413d-9a5e-fe0d57208bd5", 00:25:48.166 "optimal_io_boundary": 0, 00:25:48.166 "md_size": 0, 00:25:48.166 "dif_type": 0, 00:25:48.166 "dif_is_head_of_md": false, 00:25:48.166 "dif_pi_format": 0 00:25:48.166 } 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "method": "bdev_wait_for_examine" 00:25:48.166 } 00:25:48.166 ] 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "subsystem": "nbd", 00:25:48.166 "config": [] 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "subsystem": "scheduler", 00:25:48.166 "config": [ 00:25:48.166 { 00:25:48.166 "method": "framework_set_scheduler", 00:25:48.166 "params": { 00:25:48.166 "name": "static" 00:25:48.166 } 00:25:48.166 } 00:25:48.166 ] 00:25:48.166 }, 00:25:48.166 { 00:25:48.166 "subsystem": "nvmf", 00:25:48.166 "config": [ 00:25:48.166 { 00:25:48.166 "method": "nvmf_set_config", 00:25:48.166 "params": { 00:25:48.166 "discovery_filter": "match_any", 00:25:48.166 "admin_cmd_passthru": { 00:25:48.166 "identify_ctrlr": false 00:25:48.166 }, 00:25:48.166 "dhchap_digests": [ 00:25:48.166 "sha256", 00:25:48.166 "sha384", 00:25:48.166 "sha512" 00:25:48.166 ], 00:25:48.166 "dhchap_dhgroups": [ 00:25:48.166 "null", 00:25:48.166 "ffdhe2048", 00:25:48.166 "ffdhe3072", 00:25:48.166 "ffdhe4096", 00:25:48.166 "ffdhe6144", 00:25:48.166 "ffdhe8192" 00:25:48.166 ] 00:25:48.167 } 00:25:48.167 }, 00:25:48.167 { 00:25:48.167 "method": "nvmf_set_max_subsystems", 00:25:48.167 "params": { 00:25:48.167 "max_subsystems": 1024 00:25:48.167 } 00:25:48.167 }, 00:25:48.167 { 00:25:48.167 "method": "nvmf_set_crdt", 00:25:48.167 "params": { 00:25:48.167 "crdt1": 0, 00:25:48.167 "crdt2": 0, 00:25:48.167 "crdt3": 0 00:25:48.167 } 00:25:48.167 }, 00:25:48.167 { 00:25:48.167 "method": "nvmf_create_transport", 00:25:48.167 "params": 
{ 00:25:48.167 "trtype": "TCP", 00:25:48.167 "max_queue_depth": 128, 00:25:48.167 "max_io_qpairs_per_ctrlr": 127, 00:25:48.167 "in_capsule_data_size": 4096, 00:25:48.167 "max_io_size": 131072, 00:25:48.167 "io_unit_size": 131072, 00:25:48.167 "max_aq_depth": 128, 00:25:48.167 "num_shared_buffers": 511, 00:25:48.167 "buf_cache_size": 4294967295, 00:25:48.167 "dif_insert_or_strip": false, 00:25:48.167 "zcopy": false, 00:25:48.167 "c2h_success": false, 00:25:48.167 "sock_priority": 0, 00:25:48.167 "abort_timeout_sec": 1, 00:25:48.167 "ack_timeout": 0, 00:25:48.167 "data_wr_pool_size": 0 00:25:48.167 } 00:25:48.167 }, 00:25:48.167 { 00:25:48.167 "method": "nvmf_create_subsystem", 00:25:48.167 "params": { 00:25:48.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.167 "allow_any_host": false, 00:25:48.167 "serial_number": "00000000000000000000", 00:25:48.167 "model_number": "SPDK bdev Controller", 00:25:48.167 "max_namespaces": 32, 00:25:48.167 "min_cntlid": 1, 00:25:48.167 "max_cntlid": 65519, 00:25:48.167 "ana_reporting": false 00:25:48.167 } 00:25:48.167 }, 00:25:48.167 { 00:25:48.167 "method": "nvmf_subsystem_add_host", 00:25:48.167 "params": { 00:25:48.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.167 "host": "nqn.2016-06.io.spdk:host1", 00:25:48.167 "psk": "key0" 00:25:48.167 } 00:25:48.167 }, 00:25:48.167 { 00:25:48.167 "method": "nvmf_subsystem_add_ns", 00:25:48.167 "params": { 00:25:48.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.167 "namespace": { 00:25:48.167 "nsid": 1, 00:25:48.167 "bdev_name": "malloc0", 00:25:48.167 "nguid": "C8861CD47404413D9A5EFE0D57208BD5", 00:25:48.167 "uuid": "c8861cd4-7404-413d-9a5e-fe0d57208bd5", 00:25:48.167 "no_auto_visible": false 00:25:48.167 } 00:25:48.167 } 00:25:48.167 }, 00:25:48.167 { 00:25:48.167 "method": "nvmf_subsystem_add_listener", 00:25:48.167 "params": { 00:25:48.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.167 "listen_address": { 00:25:48.167 "trtype": "TCP", 00:25:48.167 "adrfam": "IPv4", 00:25:48.167 "traddr": "10.0.0.2", 00:25:48.167 "trsvcid": "4420" 00:25:48.167 }, 00:25:48.167 "secure_channel": false, 00:25:48.167 "sock_impl": "ssl" 00:25:48.167 } 00:25:48.167 } 00:25:48.167 ] 00:25:48.167 } 00:25:48.167 ] 00:25:48.167 }' 00:25:48.167 15:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:48.735 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:48.735 "subsystems": [ 00:25:48.735 { 00:25:48.735 "subsystem": "keyring", 00:25:48.735 "config": [ 00:25:48.735 { 00:25:48.735 "method": "keyring_file_add_key", 00:25:48.735 "params": { 00:25:48.735 "name": "key0", 00:25:48.735 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:48.735 } 00:25:48.735 } 00:25:48.735 ] 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "subsystem": "iobuf", 00:25:48.735 "config": [ 00:25:48.735 { 00:25:48.735 "method": "iobuf_set_options", 00:25:48.735 "params": { 00:25:48.735 "small_pool_count": 8192, 00:25:48.735 "large_pool_count": 1024, 00:25:48.735 "small_bufsize": 8192, 00:25:48.735 "large_bufsize": 135168, 00:25:48.735 "enable_numa": false 00:25:48.735 } 00:25:48.735 } 00:25:48.735 ] 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "subsystem": "sock", 00:25:48.735 "config": [ 00:25:48.735 { 00:25:48.735 "method": "sock_set_default_impl", 00:25:48.735 "params": { 00:25:48.735 "impl_name": "posix" 00:25:48.735 } 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "method": "sock_impl_set_options", 00:25:48.735 
"params": { 00:25:48.735 "impl_name": "ssl", 00:25:48.735 "recv_buf_size": 4096, 00:25:48.735 "send_buf_size": 4096, 00:25:48.735 "enable_recv_pipe": true, 00:25:48.735 "enable_quickack": false, 00:25:48.735 "enable_placement_id": 0, 00:25:48.735 "enable_zerocopy_send_server": true, 00:25:48.735 "enable_zerocopy_send_client": false, 00:25:48.735 "zerocopy_threshold": 0, 00:25:48.735 "tls_version": 0, 00:25:48.735 "enable_ktls": false 00:25:48.735 } 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "method": "sock_impl_set_options", 00:25:48.735 "params": { 00:25:48.735 "impl_name": "posix", 00:25:48.735 "recv_buf_size": 2097152, 00:25:48.735 "send_buf_size": 2097152, 00:25:48.735 "enable_recv_pipe": true, 00:25:48.735 "enable_quickack": false, 00:25:48.735 "enable_placement_id": 0, 00:25:48.735 "enable_zerocopy_send_server": true, 00:25:48.735 "enable_zerocopy_send_client": false, 00:25:48.735 "zerocopy_threshold": 0, 00:25:48.735 "tls_version": 0, 00:25:48.735 "enable_ktls": false 00:25:48.735 } 00:25:48.735 } 00:25:48.735 ] 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "subsystem": "vmd", 00:25:48.735 "config": [] 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "subsystem": "accel", 00:25:48.735 "config": [ 00:25:48.735 { 00:25:48.735 "method": "accel_set_options", 00:25:48.735 "params": { 00:25:48.735 "small_cache_size": 128, 00:25:48.735 "large_cache_size": 16, 00:25:48.735 "task_count": 2048, 00:25:48.735 "sequence_count": 2048, 00:25:48.735 "buf_count": 2048 00:25:48.735 } 00:25:48.735 } 00:25:48.735 ] 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "subsystem": "bdev", 00:25:48.735 "config": [ 00:25:48.735 { 00:25:48.735 "method": "bdev_set_options", 00:25:48.735 "params": { 00:25:48.735 "bdev_io_pool_size": 65535, 00:25:48.735 "bdev_io_cache_size": 256, 00:25:48.735 "bdev_auto_examine": true, 00:25:48.735 "iobuf_small_cache_size": 128, 00:25:48.735 "iobuf_large_cache_size": 16 00:25:48.735 } 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "method": "bdev_raid_set_options", 00:25:48.735 "params": { 00:25:48.735 "process_window_size_kb": 1024, 00:25:48.735 "process_max_bandwidth_mb_sec": 0 00:25:48.735 } 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "method": "bdev_iscsi_set_options", 00:25:48.735 "params": { 00:25:48.735 "timeout_sec": 30 00:25:48.735 } 00:25:48.735 }, 00:25:48.735 { 00:25:48.735 "method": "bdev_nvme_set_options", 00:25:48.735 "params": { 00:25:48.735 "action_on_timeout": "none", 00:25:48.735 "timeout_us": 0, 00:25:48.735 "timeout_admin_us": 0, 00:25:48.735 "keep_alive_timeout_ms": 10000, 00:25:48.735 "arbitration_burst": 0, 00:25:48.735 "low_priority_weight": 0, 00:25:48.735 "medium_priority_weight": 0, 00:25:48.735 "high_priority_weight": 0, 00:25:48.735 "nvme_adminq_poll_period_us": 10000, 00:25:48.735 "nvme_ioq_poll_period_us": 0, 00:25:48.735 "io_queue_requests": 512, 00:25:48.735 "delay_cmd_submit": true, 00:25:48.735 "transport_retry_count": 4, 00:25:48.735 "bdev_retry_count": 3, 00:25:48.735 "transport_ack_timeout": 0, 00:25:48.735 "ctrlr_loss_timeout_sec": 0, 00:25:48.736 "reconnect_delay_sec": 0, 00:25:48.736 "fast_io_fail_timeout_sec": 0, 00:25:48.736 "disable_auto_failback": false, 00:25:48.736 "generate_uuids": false, 00:25:48.736 "transport_tos": 0, 00:25:48.736 "nvme_error_stat": false, 00:25:48.736 "rdma_srq_size": 0, 00:25:48.736 "io_path_stat": false, 00:25:48.736 "allow_accel_sequence": false, 00:25:48.736 "rdma_max_cq_size": 0, 00:25:48.736 "rdma_cm_event_timeout_ms": 0, 00:25:48.736 "dhchap_digests": [ 00:25:48.736 "sha256", 00:25:48.736 "sha384", 00:25:48.736 
"sha512" 00:25:48.736 ], 00:25:48.736 "dhchap_dhgroups": [ 00:25:48.736 "null", 00:25:48.736 "ffdhe2048", 00:25:48.736 "ffdhe3072", 00:25:48.736 "ffdhe4096", 00:25:48.736 "ffdhe6144", 00:25:48.736 "ffdhe8192" 00:25:48.736 ] 00:25:48.736 } 00:25:48.736 }, 00:25:48.736 { 00:25:48.736 "method": "bdev_nvme_attach_controller", 00:25:48.736 "params": { 00:25:48.736 "name": "nvme0", 00:25:48.736 "trtype": "TCP", 00:25:48.736 "adrfam": "IPv4", 00:25:48.736 "traddr": "10.0.0.2", 00:25:48.736 "trsvcid": "4420", 00:25:48.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.736 "prchk_reftag": false, 00:25:48.736 "prchk_guard": false, 00:25:48.736 "ctrlr_loss_timeout_sec": 0, 00:25:48.736 "reconnect_delay_sec": 0, 00:25:48.736 "fast_io_fail_timeout_sec": 0, 00:25:48.736 "psk": "key0", 00:25:48.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.736 "hdgst": false, 00:25:48.736 "ddgst": false, 00:25:48.736 "multipath": "multipath" 00:25:48.736 } 00:25:48.736 }, 00:25:48.736 { 00:25:48.736 "method": "bdev_nvme_set_hotplug", 00:25:48.736 "params": { 00:25:48.736 "period_us": 100000, 00:25:48.736 "enable": false 00:25:48.736 } 00:25:48.736 }, 00:25:48.736 { 00:25:48.736 "method": "bdev_enable_histogram", 00:25:48.736 "params": { 00:25:48.736 "name": "nvme0n1", 00:25:48.736 "enable": true 00:25:48.736 } 00:25:48.736 }, 00:25:48.736 { 00:25:48.736 "method": "bdev_wait_for_examine" 00:25:48.736 } 00:25:48.736 ] 00:25:48.736 }, 00:25:48.736 { 00:25:48.736 "subsystem": "nbd", 00:25:48.736 "config": [] 00:25:48.736 } 00:25:48.736 ] 00:25:48.736 }' 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3224242 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3224242 ']' 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3224242 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3224242 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3224242' 00:25:48.736 killing process with pid 3224242 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3224242 00:25:48.736 Received shutdown signal, test time was about 1.000000 seconds 00:25:48.736 00:25:48.736 Latency(us) 00:25:48.736 [2024-10-28T14:21:35.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.736 [2024-10-28T14:21:35.603Z] =================================================================================================================== 00:25:48.736 [2024-10-28T14:21:35.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.736 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3224242 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3224191 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3224191 
']' 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3224191 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3224191 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3224191' 00:25:48.994 killing process with pid 3224191 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3224191 00:25:48.994 15:21:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3224191 00:25:49.272 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:49.272 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.272 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:49.272 "subsystems": [ 00:25:49.272 { 00:25:49.272 "subsystem": "keyring", 00:25:49.272 "config": [ 00:25:49.272 { 00:25:49.272 "method": "keyring_file_add_key", 00:25:49.272 "params": { 00:25:49.272 "name": "key0", 00:25:49.272 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:49.272 } 00:25:49.272 } 00:25:49.272 ] 00:25:49.272 }, 00:25:49.272 { 00:25:49.272 "subsystem": "iobuf", 00:25:49.272 "config": [ 00:25:49.272 { 00:25:49.272 "method": "iobuf_set_options", 00:25:49.273 "params": { 00:25:49.273 "small_pool_count": 8192, 00:25:49.273 "large_pool_count": 1024, 00:25:49.273 "small_bufsize": 8192, 00:25:49.273 "large_bufsize": 135168, 00:25:49.273 "enable_numa": false 00:25:49.273 } 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "subsystem": "sock", 00:25:49.273 "config": [ 00:25:49.273 { 00:25:49.273 "method": "sock_set_default_impl", 00:25:49.273 "params": { 00:25:49.273 "impl_name": "posix" 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "sock_impl_set_options", 00:25:49.273 "params": { 00:25:49.273 "impl_name": "ssl", 00:25:49.273 "recv_buf_size": 4096, 00:25:49.273 "send_buf_size": 4096, 00:25:49.273 "enable_recv_pipe": true, 00:25:49.273 "enable_quickack": false, 00:25:49.273 "enable_placement_id": 0, 00:25:49.273 "enable_zerocopy_send_server": true, 00:25:49.273 "enable_zerocopy_send_client": false, 00:25:49.273 "zerocopy_threshold": 0, 00:25:49.273 "tls_version": 0, 00:25:49.273 "enable_ktls": false 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "sock_impl_set_options", 00:25:49.273 "params": { 00:25:49.273 "impl_name": "posix", 00:25:49.273 "recv_buf_size": 2097152, 00:25:49.273 "send_buf_size": 2097152, 00:25:49.273 "enable_recv_pipe": true, 00:25:49.273 "enable_quickack": false, 00:25:49.273 "enable_placement_id": 0, 00:25:49.273 "enable_zerocopy_send_server": true, 00:25:49.273 "enable_zerocopy_send_client": false, 00:25:49.273 "zerocopy_threshold": 0, 00:25:49.273 "tls_version": 0, 00:25:49.273 "enable_ktls": false 00:25:49.273 } 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "subsystem": 
"vmd", 00:25:49.273 "config": [] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "subsystem": "accel", 00:25:49.273 "config": [ 00:25:49.273 { 00:25:49.273 "method": "accel_set_options", 00:25:49.273 "params": { 00:25:49.273 "small_cache_size": 128, 00:25:49.273 "large_cache_size": 16, 00:25:49.273 "task_count": 2048, 00:25:49.273 "sequence_count": 2048, 00:25:49.273 "buf_count": 2048 00:25:49.273 } 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "subsystem": "bdev", 00:25:49.273 "config": [ 00:25:49.273 { 00:25:49.273 "method": "bdev_set_options", 00:25:49.273 "params": { 00:25:49.273 "bdev_io_pool_size": 65535, 00:25:49.273 "bdev_io_cache_size": 256, 00:25:49.273 "bdev_auto_examine": true, 00:25:49.273 "iobuf_small_cache_size": 128, 00:25:49.273 "iobuf_large_cache_size": 16 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "bdev_raid_set_options", 00:25:49.273 "params": { 00:25:49.273 "process_window_size_kb": 1024, 00:25:49.273 "process_max_bandwidth_mb_sec": 0 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "bdev_iscsi_set_options", 00:25:49.273 "params": { 00:25:49.273 "timeout_sec": 30 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "bdev_nvme_set_options", 00:25:49.273 "params": { 00:25:49.273 "action_on_timeout": "none", 00:25:49.273 "timeout_us": 0, 00:25:49.273 "timeout_admin_us": 0, 00:25:49.273 "keep_alive_timeout_ms": 10000, 00:25:49.273 "arbitration_burst": 0, 00:25:49.273 "low_priority_weight": 0, 00:25:49.273 "medium_priority_weight": 0, 00:25:49.273 "high_priority_weight": 0, 00:25:49.273 "nvme_adminq_poll_period_us": 10000, 00:25:49.273 "nvme_ioq_poll_period_us": 0, 00:25:49.273 "io_queue_requests": 0, 00:25:49.273 "delay_cmd_submit": true, 00:25:49.273 "transport_retry_count": 4, 00:25:49.273 "bdev_retry_count": 3, 00:25:49.273 "transport_ack_timeout": 0, 00:25:49.273 "ctrlr_loss_timeout_sec": 0, 00:25:49.273 "reconnect_delay_sec": 0, 00:25:49.273 "fast_io_fail_timeout_sec": 0, 00:25:49.273 "disable_auto_failback": false, 00:25:49.273 "generate_uuids": false, 00:25:49.273 "transport_tos": 0, 00:25:49.273 "nvme_error_stat": false, 00:25:49.273 "rdma_srq_size": 0, 00:25:49.273 "io_path_stat": false, 00:25:49.273 "allow_accel_sequence": false, 00:25:49.273 "rdma_max_cq_size": 0, 00:25:49.273 "rdma_cm_event_timeout_ms": 0, 00:25:49.273 "dhchap_digests": [ 00:25:49.273 "sha256", 00:25:49.273 "sha384", 00:25:49.273 "sha512" 00:25:49.273 ], 00:25:49.273 "dhchap_dhgroups": [ 00:25:49.273 "null", 00:25:49.273 "ffdhe2048", 00:25:49.273 "ffdhe3072", 00:25:49.273 "ffdhe4096", 00:25:49.273 "ffdhe6144", 00:25:49.273 "ffdhe8192" 00:25:49.273 ] 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "bdev_nvme_set_hotplug", 00:25:49.273 "params": { 00:25:49.273 "period_us": 100000, 00:25:49.273 "enable": false 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "bdev_malloc_create", 00:25:49.273 "params": { 00:25:49.273 "name": "malloc0", 00:25:49.273 "num_blocks": 8192, 00:25:49.273 "block_size": 4096, 00:25:49.273 "physical_block_size": 4096, 00:25:49.273 "uuid": "c8861cd4-7404-413d-9a5e-fe0d57208bd5", 00:25:49.273 "optimal_io_boundary": 0, 00:25:49.273 "md_size": 0, 00:25:49.273 "dif_type": 0, 00:25:49.273 "dif_is_head_of_md": false, 00:25:49.273 "dif_pi_format": 0 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "bdev_wait_for_examine" 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "subsystem": "nbd", 00:25:49.273 "config": 
[] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "subsystem": "scheduler", 00:25:49.273 "config": [ 00:25:49.273 { 00:25:49.273 "method": "framework_set_scheduler", 00:25:49.273 "params": { 00:25:49.273 "name": "static" 00:25:49.273 } 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "subsystem": "nvmf", 00:25:49.273 "config": [ 00:25:49.273 { 00:25:49.273 "method": "nvmf_set_config", 00:25:49.273 "params": { 00:25:49.273 "discovery_filter": "match_any", 00:25:49.273 "admin_cmd_passthru": { 00:25:49.273 "identify_ctrlr": false 00:25:49.273 }, 00:25:49.273 "dhchap_digests": [ 00:25:49.273 "sha256", 00:25:49.273 "sha384", 00:25:49.273 "sha512" 00:25:49.273 ], 00:25:49.273 "dhchap_dhgroups": [ 00:25:49.273 "null", 00:25:49.273 "ffdhe2048", 00:25:49.273 "ffdhe3072", 00:25:49.273 "ffdhe4096", 00:25:49.273 "ffdhe6144", 00:25:49.273 "ffdhe8192" 00:25:49.273 ] 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "nvmf_set_max_subsystems", 00:25:49.273 "params": { 00:25:49.273 "max_subsystems": 1024 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "nvmf_set_crdt", 00:25:49.273 "params": { 00:25:49.273 "crdt1": 0, 00:25:49.273 "crdt2": 0, 00:25:49.273 "crdt3": 0 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "nvmf_create_transport", 00:25:49.273 "params": { 00:25:49.273 "trtype": "TCP", 00:25:49.273 "max_queue_depth": 128, 00:25:49.273 "max_io_qpairs_per_ctrlr": 127, 00:25:49.273 "in_capsule_data_size": 4096, 00:25:49.273 "max_io_size": 131072, 00:25:49.273 "io_unit_size": 131072, 00:25:49.273 "max_aq_depth": 128, 00:25:49.273 "num_shared_buffers": 511, 00:25:49.273 "buf_cache_size": 4294967295, 00:25:49.273 "dif_insert_or_strip": false, 00:25:49.273 "zcopy": false, 00:25:49.273 "c2h_success": false, 00:25:49.273 "sock_priority": 0, 00:25:49.273 "abort_timeout_sec": 1, 00:25:49.273 "ack_timeout": 0, 00:25:49.273 "data_wr_pool_size": 0 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "nvmf_create_subsystem", 00:25:49.273 "params": { 00:25:49.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.273 "allow_any_host": false, 00:25:49.273 "serial_number": "00000000000000000000", 00:25:49.273 "model_number": "SPDK bdev Controller", 00:25:49.273 "max_namespaces": 32, 00:25:49.273 "min_cntlid": 1, 00:25:49.273 "max_cntlid": 65519, 00:25:49.273 "ana_reporting": false 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "nvmf_subsystem_add_host", 00:25:49.273 "params": { 00:25:49.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.273 "host": "nqn.2016-06.io.spdk:host1", 00:25:49.273 "psk": "key0" 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "nvmf_subsystem_add_ns", 00:25:49.273 "params": { 00:25:49.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.273 "namespace": { 00:25:49.273 "nsid": 1, 00:25:49.273 "bdev_name": "malloc0", 00:25:49.273 "nguid": "C8861CD47404413D9A5EFE0D57208BD5", 00:25:49.273 "uuid": "c8861cd4-7404-413d-9a5e-fe0d57208bd5", 00:25:49.273 "no_auto_visible": false 00:25:49.273 } 00:25:49.273 } 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "method": "nvmf_subsystem_add_listener", 00:25:49.273 "params": { 00:25:49.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.273 "listen_address": { 00:25:49.273 "trtype": "TCP", 00:25:49.273 "adrfam": "IPv4", 00:25:49.273 "traddr": "10.0.0.2", 00:25:49.274 "trsvcid": "4420" 00:25:49.274 }, 00:25:49.274 "secure_channel": false, 00:25:49.274 "sock_impl": "ssl" 00:25:49.274 } 00:25:49.274 } 00:25:49.274 ] 00:25:49.274 } 
00:25:49.274 ] 00:25:49.274 }' 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3224889 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3224889 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3224889 ']' 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:49.274 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.534 [2024-10-28 15:21:36.154664] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:25:49.534 [2024-10-28 15:21:36.154775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.534 [2024-10-28 15:21:36.292778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.792 [2024-10-28 15:21:36.406446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.792 [2024-10-28 15:21:36.406557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.792 [2024-10-28 15:21:36.406608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.792 [2024-10-28 15:21:36.406641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.792 [2024-10-28 15:21:36.406688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
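For reference, once the iobuf, sock and bdev tuning knobs are stripped away, the target-side configuration piped into nvmf_tgt above reduces to the shape below. This is a minimal sketch, not the test script's own plumbing: it assumes an SPDK build tree (target binary at ./build/bin/nvmf_tgt) and reuses the PSK interchange file, NQNs and listen address from this run. The TLS-relevant pieces are the keyring_file_add_key entry that registers the PSK as key0, the nvmf_subsystem_add_host entry that binds that key to the host NQN, and the listener's "sock_impl": "ssl".

# Trimmed-down version of the config echoed above; /tmp/tmp.5mD0rT7vYi is the
# PSK interchange file created earlier in the test.
cat > /tmp/tls_tgt.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.5mD0rT7vYi" } }
    ] },
    { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create",
        "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } }
    ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport",
        "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false,
                    "serial_number": "00000000000000000000",
                    "model_number": "SPDK bdev Controller" } },
      { "method": "nvmf_subsystem_add_ns",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "namespace": { "nsid": 1, "bdev_name": "malloc0" } } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": false, "sock_impl": "ssl" } }
    ] }
  ]
}
EOF

# assumed path to the target binary inside a local SPDK build
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tls_tgt.json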
00:25:49.792 [2024-10-28 15:21:36.408179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.052 [2024-10-28 15:21:36.737684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.052 [2024-10-28 15:21:36.770640] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:50.052 [2024-10-28 15:21:36.770925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3225008 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3225008 /var/tmp/bdevperf.sock 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3225008 ']' 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:50.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
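The waitforlisten step above simply blocks until the freshly started application answers on its JSON-RPC socket. A rough equivalent of that wait, not the helper's actual implementation, is to poll an inexpensive RPC such as rpc_get_methods until it succeeds:

RPC_SOCK=/var/tmp/bdevperf.sock
RPC_PY=./scripts/rpc.py        # assumed path inside an SPDK checkout

for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the app is up and serving the socket
    if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done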
00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.052 15:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:50.052 "subsystems": [ 00:25:50.052 { 00:25:50.052 "subsystem": "keyring", 00:25:50.052 "config": [ 00:25:50.052 { 00:25:50.052 "method": "keyring_file_add_key", 00:25:50.052 "params": { 00:25:50.052 "name": "key0", 00:25:50.052 "path": "/tmp/tmp.5mD0rT7vYi" 00:25:50.052 } 00:25:50.052 } 00:25:50.052 ] 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "subsystem": "iobuf", 00:25:50.052 "config": [ 00:25:50.052 { 00:25:50.052 "method": "iobuf_set_options", 00:25:50.052 "params": { 00:25:50.052 "small_pool_count": 8192, 00:25:50.052 "large_pool_count": 1024, 00:25:50.052 "small_bufsize": 8192, 00:25:50.052 "large_bufsize": 135168, 00:25:50.052 "enable_numa": false 00:25:50.052 } 00:25:50.052 } 00:25:50.052 ] 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "subsystem": "sock", 00:25:50.052 "config": [ 00:25:50.052 { 00:25:50.052 "method": "sock_set_default_impl", 00:25:50.052 "params": { 00:25:50.052 "impl_name": "posix" 00:25:50.052 } 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "method": "sock_impl_set_options", 00:25:50.052 "params": { 00:25:50.052 "impl_name": "ssl", 00:25:50.052 "recv_buf_size": 4096, 00:25:50.052 "send_buf_size": 4096, 00:25:50.052 "enable_recv_pipe": true, 00:25:50.052 "enable_quickack": false, 00:25:50.052 "enable_placement_id": 0, 00:25:50.052 "enable_zerocopy_send_server": true, 00:25:50.052 "enable_zerocopy_send_client": false, 00:25:50.052 "zerocopy_threshold": 0, 00:25:50.052 "tls_version": 0, 00:25:50.052 "enable_ktls": false 00:25:50.052 } 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "method": "sock_impl_set_options", 00:25:50.052 "params": { 00:25:50.052 "impl_name": "posix", 00:25:50.052 "recv_buf_size": 2097152, 00:25:50.052 "send_buf_size": 2097152, 00:25:50.052 "enable_recv_pipe": true, 00:25:50.052 "enable_quickack": false, 00:25:50.052 "enable_placement_id": 0, 00:25:50.052 "enable_zerocopy_send_server": true, 00:25:50.052 "enable_zerocopy_send_client": false, 00:25:50.052 "zerocopy_threshold": 0, 00:25:50.052 "tls_version": 0, 00:25:50.052 "enable_ktls": false 00:25:50.052 } 00:25:50.052 } 00:25:50.052 ] 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "subsystem": "vmd", 00:25:50.052 "config": [] 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "subsystem": "accel", 00:25:50.052 "config": [ 00:25:50.052 { 00:25:50.052 "method": "accel_set_options", 00:25:50.052 "params": { 00:25:50.052 "small_cache_size": 128, 00:25:50.052 "large_cache_size": 16, 00:25:50.052 "task_count": 2048, 00:25:50.052 "sequence_count": 2048, 00:25:50.052 "buf_count": 2048 00:25:50.052 } 00:25:50.052 } 00:25:50.052 ] 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "subsystem": "bdev", 00:25:50.052 "config": [ 00:25:50.052 { 00:25:50.052 "method": "bdev_set_options", 00:25:50.052 "params": { 00:25:50.052 "bdev_io_pool_size": 65535, 00:25:50.052 "bdev_io_cache_size": 256, 00:25:50.052 "bdev_auto_examine": true, 00:25:50.052 "iobuf_small_cache_size": 128, 00:25:50.052 "iobuf_large_cache_size": 16 00:25:50.052 } 00:25:50.052 }, 00:25:50.052 { 00:25:50.052 "method": "bdev_raid_set_options", 00:25:50.052 "params": { 00:25:50.052 "process_window_size_kb": 1024, 00:25:50.053 "process_max_bandwidth_mb_sec": 0 00:25:50.053 } 00:25:50.053 }, 00:25:50.053 { 00:25:50.053 "method": "bdev_iscsi_set_options", 
00:25:50.053 "params": { 00:25:50.053 "timeout_sec": 30 00:25:50.053 } 00:25:50.053 }, 00:25:50.053 { 00:25:50.053 "method": "bdev_nvme_set_options", 00:25:50.053 "params": { 00:25:50.053 "action_on_timeout": "none", 00:25:50.053 "timeout_us": 0, 00:25:50.053 "timeout_admin_us": 0, 00:25:50.053 "keep_alive_timeout_ms": 10000, 00:25:50.053 "arbitration_burst": 0, 00:25:50.053 "low_priority_weight": 0, 00:25:50.053 "medium_priority_weight": 0, 00:25:50.053 "high_priority_weight": 0, 00:25:50.053 "nvme_adminq_poll_period_us": 10000, 00:25:50.053 "nvme_ioq_poll_period_us": 0, 00:25:50.053 "io_queue_requests": 512, 00:25:50.053 "delay_cmd_submit": true, 00:25:50.053 "transport_retry_count": 4, 00:25:50.053 "bdev_retry_count": 3, 00:25:50.053 "transport_ack_timeout": 0, 00:25:50.053 "ctrlr_loss_timeout_sec": 0, 00:25:50.053 "reconnect_delay_sec": 0, 00:25:50.053 "fast_io_fail_timeout_sec": 0, 00:25:50.053 "disable_auto_failback": false, 00:25:50.053 "generate_uuids": false, 00:25:50.053 "transport_tos": 0, 00:25:50.053 "nvme_error_stat": false, 00:25:50.053 "rdma_srq_size": 0, 00:25:50.053 "io_path_stat": false, 00:25:50.053 "allow_accel_sequence": false, 00:25:50.053 "rdma_max_cq_size": 0, 00:25:50.053 "rdma_cm_event_timeout_ms": 0, 00:25:50.053 "dhchap_digests": [ 00:25:50.053 "sha256", 00:25:50.053 "sha384", 00:25:50.053 "sha512" 00:25:50.053 ], 00:25:50.053 "dhchap_dhgroups": [ 00:25:50.053 "null", 00:25:50.053 "ffdhe2048", 00:25:50.053 "ffdhe3072", 00:25:50.053 "ffdhe4096", 00:25:50.053 "ffdhe6144", 00:25:50.053 "ffdhe8192" 00:25:50.053 ] 00:25:50.053 } 00:25:50.053 }, 00:25:50.053 { 00:25:50.053 "method": "bdev_nvme_attach_controller", 00:25:50.053 "params": { 00:25:50.053 "name": "nvme0", 00:25:50.053 "trtype": "TCP", 00:25:50.053 "adrfam": "IPv4", 00:25:50.053 "traddr": "10.0.0.2", 00:25:50.053 "trsvcid": "4420", 00:25:50.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.053 "prchk_reftag": false, 00:25:50.053 "prchk_guard": false, 00:25:50.053 "ctrlr_loss_timeout_sec": 0, 00:25:50.053 "reconnect_delay_sec": 0, 00:25:50.053 "fast_io_fail_timeout_sec": 0, 00:25:50.053 "psk": "key0", 00:25:50.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:50.053 "hdgst": false, 00:25:50.053 "ddgst": false, 00:25:50.053 "multipath": "multipath" 00:25:50.053 } 00:25:50.053 }, 00:25:50.053 { 00:25:50.053 "method": "bdev_nvme_set_hotplug", 00:25:50.053 "params": { 00:25:50.053 "period_us": 100000, 00:25:50.053 "enable": false 00:25:50.053 } 00:25:50.053 }, 00:25:50.053 { 00:25:50.053 "method": "bdev_enable_histogram", 00:25:50.053 "params": { 00:25:50.053 "name": "nvme0n1", 00:25:50.053 "enable": true 00:25:50.053 } 00:25:50.053 }, 00:25:50.053 { 00:25:50.053 "method": "bdev_wait_for_examine" 00:25:50.053 } 00:25:50.053 ] 00:25:50.053 }, 00:25:50.053 { 00:25:50.053 "subsystem": "nbd", 00:25:50.053 "config": [] 00:25:50.053 } 00:25:50.053 ] 00:25:50.053 }' 00:25:50.053 [2024-10-28 15:21:36.894865] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
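Because bdevperf is started with -z (start idle and wait for RPC), the initiator side of the TLS connection can equally be wired up over its RPC socket instead of a /dev/fd config; the sketch below does that with the same PSK file, NQNs and target address as this run. The argument spellings of the two rpc.py calls are assumptions based on rpc.py's interface, not something captured in this log.

BDEVPERF=./build/examples/bdevperf
RPC_PY=./scripts/rpc.py
PERF_PY=./examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bdevperf.sock

# Same workload knobs as the run above, but started idle (-z) and configured over RPC
$BDEVPERF -m 2 -z -r $SOCK -q 128 -o 4k -w verify -t 1 &

# ...poll $SOCK as in the previous sketch, then attach the TLS-protected controller.
# NOTE: argument spellings below are assumed from rpc.py, not copied from this log.
$RPC_PY -s $SOCK keyring_file_add_key key0 /tmp/tmp.5mD0rT7vYi
$RPC_PY -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# Run the one-second verify workload (the perform_tests call seen below)
$PERF_PY -s $SOCK perform_tests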
00:25:50.053 [2024-10-28 15:21:36.894980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225008 ] 00:25:50.313 [2024-10-28 15:21:37.025974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.313 [2024-10-28 15:21:37.141250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.573 [2024-10-28 15:21:37.397834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:51.510 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.510 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:51.510 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:51.510 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:51.769 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.769 15:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:52.027 Running I/O for 1 seconds... 00:25:52.967 1546.00 IOPS, 6.04 MiB/s 00:25:52.967 Latency(us) 00:25:52.967 [2024-10-28T14:21:39.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.967 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:52.967 Verification LBA range: start 0x0 length 0x2000 00:25:52.967 nvme0n1 : 1.04 1606.41 6.28 0.00 0.00 78451.46 14951.92 59030.95 00:25:52.967 [2024-10-28T14:21:39.834Z] =================================================================================================================== 00:25:52.967 [2024-10-28T14:21:39.834Z] Total : 1606.41 6.28 0.00 0.00 78451.46 14951.92 59030.95 00:25:52.967 { 00:25:52.967 "results": [ 00:25:52.967 { 00:25:52.967 "job": "nvme0n1", 00:25:52.967 "core_mask": "0x2", 00:25:52.967 "workload": "verify", 00:25:52.967 "status": "finished", 00:25:52.967 "verify_range": { 00:25:52.967 "start": 0, 00:25:52.967 "length": 8192 00:25:52.967 }, 00:25:52.967 "queue_depth": 128, 00:25:52.967 "io_size": 4096, 00:25:52.967 "runtime": 1.042075, 00:25:52.967 "iops": 1606.4102871674304, 00:25:52.967 "mibps": 6.275040184247775, 00:25:52.967 "io_failed": 0, 00:25:52.967 "io_timeout": 0, 00:25:52.967 "avg_latency_us": 78451.4552183725, 00:25:52.967 "min_latency_us": 14951.917037037038, 00:25:52.967 "max_latency_us": 59030.945185185185 00:25:52.967 } 00:25:52.967 ], 00:25:52.967 "core_count": 1 00:25:52.967 } 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:53.225 nvmf_trace.0 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3225008 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3225008 ']' 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3225008 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:53.225 15:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3225008 00:25:53.225 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:53.225 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:53.225 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3225008' 00:25:53.225 killing process with pid 3225008 00:25:53.225 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3225008 00:25:53.225 Received shutdown signal, test time was about 1.000000 seconds 00:25:53.225 00:25:53.225 Latency(us) 00:25:53.225 [2024-10-28T14:21:40.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.225 [2024-10-28T14:21:40.092Z] =================================================================================================================== 00:25:53.225 [2024-10-28T14:21:40.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.225 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3225008 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.792 rmmod nvme_tcp 00:25:53.792 rmmod nvme_fabrics 00:25:53.792 rmmod nvme_keyring 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.792 15:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3224889 ']' 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3224889 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3224889 ']' 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3224889 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3224889 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3224889' 00:25:53.792 killing process with pid 3224889 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3224889 00:25:53.792 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3224889 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.050 15:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iqI9e4hULz /tmp/tmp.hpMpjg35m9 /tmp/tmp.5mD0rT7vYi 00:25:56.592 00:25:56.592 real 1m45.234s 00:25:56.592 user 3m2.999s 00:25:56.592 sys 0m31.012s 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.592 ************************************ 00:25:56.592 END TEST nvmf_tls 
00:25:56.592 ************************************ 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:56.592 ************************************ 00:25:56.592 START TEST nvmf_fips 00:25:56.592 ************************************ 00:25:56.592 15:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:56.592 * Looking for test storage... 00:25:56.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # lcov --version 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:25:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.592 --rc genhtml_branch_coverage=1 00:25:56.592 --rc genhtml_function_coverage=1 00:25:56.592 --rc genhtml_legend=1 00:25:56.592 --rc geninfo_all_blocks=1 00:25:56.592 --rc geninfo_unexecuted_blocks=1 00:25:56.592 00:25:56.592 ' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:25:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.592 --rc genhtml_branch_coverage=1 00:25:56.592 --rc genhtml_function_coverage=1 00:25:56.592 --rc genhtml_legend=1 00:25:56.592 --rc geninfo_all_blocks=1 00:25:56.592 --rc geninfo_unexecuted_blocks=1 00:25:56.592 00:25:56.592 ' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:25:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.592 --rc genhtml_branch_coverage=1 00:25:56.592 --rc genhtml_function_coverage=1 00:25:56.592 --rc genhtml_legend=1 00:25:56.592 --rc geninfo_all_blocks=1 00:25:56.592 --rc geninfo_unexecuted_blocks=1 00:25:56.592 00:25:56.592 ' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:25:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.592 --rc genhtml_branch_coverage=1 00:25:56.592 --rc genhtml_function_coverage=1 00:25:56.592 --rc genhtml_legend=1 00:25:56.592 --rc geninfo_all_blocks=1 00:25:56.592 --rc geninfo_unexecuted_blocks=1 00:25:56.592 00:25:56.592 ' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.592 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:56.593 15:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:56.593 Error setting digest 00:25:56.593 4072A3F1AD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:56.593 4072A3F1AD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:56.593 
15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.593 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.594 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:56.594 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.594 15:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.912 15:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:59.912 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:59.912 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.912 15:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:59.912 Found net devices under 0000:84:00.0: cvl_0_0 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:59.912 Found net devices under 0000:84:00.1: cvl_0_1 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.912 15:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.912 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:25:59.913 00:25:59.913 --- 10.0.0.2 ping statistics --- 00:25:59.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.913 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:59.913 00:25:59.913 --- 10.0.0.1 ping statistics --- 00:25:59.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.913 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3227544 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3227544 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3227544 ']' 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:59.913 [2024-10-28 15:21:46.449232] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:25:59.913 [2024-10-28 15:21:46.449307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.913 [2024-10-28 15:21:46.521708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.913 [2024-10-28 15:21:46.577235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.913 [2024-10-28 15:21:46.577295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.913 [2024-10-28 15:21:46.577309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.913 [2024-10-28 15:21:46.577319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.913 [2024-10-28 15:21:46.577329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.913 [2024-10-28 15:21:46.577986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:59.913 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.CZy 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.CZy 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.CZy 00:26:00.173 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.CZy 00:26:00.174 15:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:00.745 [2024-10-28 15:21:47.435252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.745 [2024-10-28 15:21:47.452138] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:00.745 [2024-10-28 15:21:47.452558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.745 malloc0 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:00.745 15:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3227695 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3227695 /var/tmp/bdevperf.sock 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3227695 ']' 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.745 15:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:01.003 [2024-10-28 15:21:47.655963] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:26:01.003 [2024-10-28 15:21:47.656052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227695 ] 00:26:01.003 [2024-10-28 15:21:47.734019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.003 [2024-10-28 15:21:47.800981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.569 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:01.569 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:26:01.569 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.CZy 00:26:01.828 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:02.089 [2024-10-28 15:21:48.808512] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:02.089 TLSTESTn1 00:26:02.089 15:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:02.349 Running I/O for 10 seconds... 
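Before the per-second throughput samples that follow, it is worth restating the sequence the trace has just walked through: fips.sh writes an NVMe TLS PSK interchange string to a temporary 0600 file, registers it with the bdevperf application over a private RPC socket, attaches an NVMe-oF/TCP controller with that key, and then drives the timed verify workload through bdevperf.py. A minimal bash sketch of that flow, using the socket path, subsystem names and test PSK visible in the trace (SPDK_DIR is an assumed variable pointing at an SPDK checkout, and the sleep is a crude stand-in for the harness's waitforlisten):

    #!/usr/bin/env bash
    set -e
    SPDK_DIR=/path/to/spdk                    # assumption: adjust to your checkout
    SOCK=/var/tmp/bdevperf.sock
    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'   # test PSK from the trace
    PSK_FILE=$(mktemp -t spdk-psk.XXX)
    echo -n "$KEY" > "$PSK_FILE"
    chmod 0600 "$PSK_FILE"
    # start bdevperf idle (-z) on its own RPC socket, workload defined up front
    "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
    sleep 2                                   # stand-in for waitforlisten
    # register the PSK and attach a TLS-protected NVMe-oF/TCP controller
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" keyring_file_add_key key0 "$PSK_FILE"
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # kick off the timed verify run against the attached bdev
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

This is only a reconstruction of what the traced script appears to do; it assumes a target is already listening on 10.0.0.2:4420 with the same PSK configured, which in the log is handled earlier by setup_nvmf_tgt_conf.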
00:26:04.669 2670.00 IOPS, 10.43 MiB/s [2024-10-28T14:21:52.479Z] 2249.50 IOPS, 8.79 MiB/s [2024-10-28T14:21:53.421Z] 1992.00 IOPS, 7.78 MiB/s [2024-10-28T14:21:54.361Z] 1859.25 IOPS, 7.26 MiB/s [2024-10-28T14:21:55.300Z] 1781.80 IOPS, 6.96 MiB/s [2024-10-28T14:21:56.239Z] 1732.67 IOPS, 6.77 MiB/s [2024-10-28T14:21:57.619Z] 1697.86 IOPS, 6.63 MiB/s [2024-10-28T14:21:58.556Z] 1669.38 IOPS, 6.52 MiB/s [2024-10-28T14:21:59.489Z] 1648.00 IOPS, 6.44 MiB/s [2024-10-28T14:21:59.489Z] 1680.90 IOPS, 6.57 MiB/s 00:26:12.622 Latency(us) 00:26:12.622 [2024-10-28T14:21:59.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.622 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:12.622 Verification LBA range: start 0x0 length 0x2000 00:26:12.622 TLSTESTn1 : 10.03 1688.81 6.60 0.00 0.00 75641.39 9417.77 58254.22 00:26:12.622 [2024-10-28T14:21:59.489Z] =================================================================================================================== 00:26:12.622 [2024-10-28T14:21:59.489Z] Total : 1688.81 6.60 0.00 0.00 75641.39 9417.77 58254.22 00:26:12.622 { 00:26:12.622 "results": [ 00:26:12.622 { 00:26:12.622 "job": "TLSTESTn1", 00:26:12.622 "core_mask": "0x4", 00:26:12.622 "workload": "verify", 00:26:12.622 "status": "finished", 00:26:12.622 "verify_range": { 00:26:12.622 "start": 0, 00:26:12.622 "length": 8192 00:26:12.622 }, 00:26:12.622 "queue_depth": 128, 00:26:12.622 "io_size": 4096, 00:26:12.622 "runtime": 10.025412, 00:26:12.622 "iops": 1688.808400093682, 00:26:12.622 "mibps": 6.596907812865945, 00:26:12.622 "io_failed": 0, 00:26:12.622 "io_timeout": 0, 00:26:12.622 "avg_latency_us": 75641.38520644796, 00:26:12.622 "min_latency_us": 9417.765925925925, 00:26:12.622 "max_latency_us": 58254.22222222222 00:26:12.622 } 00:26:12.622 ], 00:26:12.622 "core_count": 1 00:26:12.622 } 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:12.622 nvmf_trace.0 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:26:12.622 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3227695 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3227695 ']' 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 
-- # kill -0 3227695 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3227695 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3227695' 00:26:12.623 killing process with pid 3227695 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3227695 00:26:12.623 Received shutdown signal, test time was about 10.000000 seconds 00:26:12.623 00:26:12.623 Latency(us) 00:26:12.623 [2024-10-28T14:21:59.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.623 [2024-10-28T14:21:59.490Z] =================================================================================================================== 00:26:12.623 [2024-10-28T14:21:59.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.623 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3227695 00:26:12.881 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:12.881 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:12.881 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:26:12.881 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:12.881 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:26:12.881 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:12.881 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:12.881 rmmod nvme_tcp 00:26:13.139 rmmod nvme_fabrics 00:26:13.139 rmmod nvme_keyring 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3227544 ']' 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3227544 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3227544 ']' 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3227544 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3227544 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3227544' 00:26:13.139 killing process with pid 3227544 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3227544 00:26:13.139 15:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3227544 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.398 15:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.CZy 00:26:15.929 00:26:15.929 real 0m19.288s 00:26:15.929 user 0m25.507s 00:26:15.929 sys 0m6.550s 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:15.929 ************************************ 00:26:15.929 END TEST nvmf_fips 00:26:15.929 ************************************ 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:15.929 ************************************ 00:26:15.929 START TEST nvmf_control_msg_list 00:26:15.929 ************************************ 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:15.929 * Looking for test storage... 
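Before the control_msg_list case continues below, note how the FIPS run above tears itself down: bdevperf and the target are killed by PID, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the SPDK_NVMF-tagged firewall rule is dropped by filtering it out of iptables-save output, the test namespace is removed, and the temporary PSK file is deleted. A hedged sketch of the equivalent manual teardown (namespace, interface and PID values are taken from the trace; the body of remove_spdk_ns is not shown in the log, so the ip netns delete line is an assumption about what it does):

    # stop the applications started by the test (PIDs as logged above)
    kill 3227695 3227544 || true
    # unload the kernel NVMe-oF initiator modules loaded at the start of the run
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    modprobe -r nvme-keyring
    # drop only the SPDK_NVMF-tagged firewall rule, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # remove the target-side namespace (assumed behaviour of remove_spdk_ns) and clear the initiator address
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
    # delete the temporary PSK file written by fips.sh
    rm -f /tmp/spdk-psk.CZy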
00:26:15.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # lcov --version 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.929 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:26:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.930 --rc genhtml_branch_coverage=1 00:26:15.930 --rc genhtml_function_coverage=1 00:26:15.930 --rc genhtml_legend=1 00:26:15.930 --rc geninfo_all_blocks=1 00:26:15.930 --rc geninfo_unexecuted_blocks=1 00:26:15.930 00:26:15.930 ' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:26:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.930 --rc genhtml_branch_coverage=1 00:26:15.930 --rc genhtml_function_coverage=1 00:26:15.930 --rc genhtml_legend=1 00:26:15.930 --rc geninfo_all_blocks=1 00:26:15.930 --rc geninfo_unexecuted_blocks=1 00:26:15.930 00:26:15.930 ' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:26:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.930 --rc genhtml_branch_coverage=1 00:26:15.930 --rc genhtml_function_coverage=1 00:26:15.930 --rc genhtml_legend=1 00:26:15.930 --rc geninfo_all_blocks=1 00:26:15.930 --rc geninfo_unexecuted_blocks=1 00:26:15.930 00:26:15.930 ' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:26:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.930 --rc genhtml_branch_coverage=1 00:26:15.930 --rc genhtml_function_coverage=1 00:26:15.930 --rc genhtml_legend=1 00:26:15.930 --rc geninfo_all_blocks=1 00:26:15.930 --rc geninfo_unexecuted_blocks=1 00:26:15.930 00:26:15.930 ' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.930 15:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:26:18.467 15:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:18.467 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.467 15:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:18.467 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.467 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:18.468 Found net devices under 0000:84:00.0: cvl_0_0 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:18.468 Found net devices under 0000:84:00.1: cvl_0_1 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.468 15:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.468 15:22:05 
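The nvmf_tcp_init block being traced here, like the one in the FIPS case earlier, splits the two ports of the E810 NIC into a target side and an initiator side: cvl_0_0 is moved into a private network namespace and given 10.0.0.2/24, while cvl_0_1 stays in the root namespace with 10.0.0.1/24; the firewall rule and ping checks that appear next in the trace complete the setup. Collected into one sketch, with device and namespace names exactly as they appear in the log (the assumption that the two ports are cabled to each other on the test rig is consistent with the successful pings, but not stated in the log):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator interface, tagged for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> root namespace

This gives real-NIC TCP traffic between target and initiator on a single host, which is what the subsequent ping statistics in the trace verify.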
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:18.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:26:18.468 00:26:18.468 --- 10.0.0.2 ping statistics --- 00:26:18.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.468 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:26:18.468 00:26:18.468 --- 10.0.0.1 ping statistics --- 00:26:18.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.468 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3231106 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3231106 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3231106 ']' 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.468 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:18.468 [2024-10-28 15:22:05.290436] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:26:18.468 [2024-10-28 15:22:05.290613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.730 [2024-10-28 15:22:05.474767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.730 [2024-10-28 15:22:05.592530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.730 [2024-10-28 15:22:05.592637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.730 [2024-10-28 15:22:05.592709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.730 [2024-10-28 15:22:05.592767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.730 [2024-10-28 15:22:05.592815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
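What starts here is the target for the control_msg_list case: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with tracepoints enabled (-e 0xFFFF), and the harness blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A rough approximation of that launch-and-wait step follows; the retry loop is only an illustration of what waitforlisten accomplishes, not its actual implementation:

    SPDK_DIR=/path/to/spdk          # assumption: adjust to your checkout
    SOCK=/var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    NVMF_PID=$!
    # poll the RPC socket until the target is ready (waitforlisten equivalent)
    until "$SPDK_DIR/scripts/rpc.py" -t 1 -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$NVMF_PID" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done

The DPDK EAL and reactor notices interleaved with the trace here are that application's own startup output.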
00:26:18.730 [2024-10-28 15:22:05.594317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.019 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.019 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:26:19.019 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:19.019 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:19.019 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.318 [2024-10-28 15:22:05.920255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.318 Malloc0 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.318 15:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.318 [2024-10-28 15:22:05.967604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3231197 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3231199 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3231201 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.318 15:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3231197 00:26:19.318 [2024-10-28 15:22:06.055975] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:19.318 [2024-10-28 15:22:06.076184] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:19.318 [2024-10-28 15:22:06.077665] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:20.305 Initializing NVMe Controllers 00:26:20.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:20.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:26:20.305 Initialization complete. Launching workers. 
00:26:20.305 ======================================================== 00:26:20.305 Latency(us) 00:26:20.305 Device Information : IOPS MiB/s Average min max 00:26:20.305 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3276.00 12.80 304.59 176.95 694.51 00:26:20.305 ======================================================== 00:26:20.305 Total : 3276.00 12.80 304.59 176.95 694.51 00:26:20.305 00:26:20.305 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3231199 00:26:20.564 Initializing NVMe Controllers 00:26:20.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:20.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:26:20.564 Initialization complete. Launching workers. 00:26:20.564 ======================================================== 00:26:20.564 Latency(us) 00:26:20.564 Device Information : IOPS MiB/s Average min max 00:26:20.564 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2681.00 10.47 372.23 159.10 804.92 00:26:20.564 ======================================================== 00:26:20.564 Total : 2681.00 10.47 372.23 159.10 804.92 00:26:20.564 00:26:20.564 Initializing NVMe Controllers 00:26:20.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:20.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:26:20.565 Initialization complete. Launching workers. 00:26:20.565 ======================================================== 00:26:20.565 Latency(us) 00:26:20.565 Device Information : IOPS MiB/s Average min max 00:26:20.565 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2688.00 10.50 371.37 144.76 669.57 00:26:20.565 ======================================================== 00:26:20.565 Total : 2688.00 10.50 371.37 144.76 669.57 00:26:20.565 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3231201 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:20.565 rmmod nvme_tcp 00:26:20.565 rmmod nvme_fabrics 00:26:20.565 rmmod nvme_keyring 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 3231106 ']' 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3231106 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3231106 ']' 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3231106 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3231106 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3231106' 00:26:20.565 killing process with pid 3231106 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3231106 00:26:20.565 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3231106 00:26:21.136 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:21.136 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:21.136 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:21.136 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:26:21.136 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:26:21.136 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:21.137 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:26:21.137 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:21.137 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:21.137 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.137 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.137 15:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:23.045 00:26:23.045 real 0m7.438s 00:26:23.045 user 0m6.472s 00:26:23.045 sys 0m3.430s 00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:23.045 ************************************ 00:26:23.045 END TEST nvmf_control_msg_list 00:26:23.045 ************************************ 
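For reference, the target-side setup that the control_msg_list test above drove, collected into one sequence. rpc_cmd in the trace is the autotest wrapper around the target's JSON-RPC socket; the sketch below assumes it is equivalent to invoking scripts/rpc.py against /var/tmp/spdk.sock and shortens the absolute workspace paths to be relative to the SPDK checkout — the usual mapping, but not spelled out in this log:

  # TCP transport with small in-capsule data and a single control message,
  # so the perf initiators below contend for control-msg-list entries.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # One spdk_nvme_perf instance per core mask, as in the trace above.
  for core in 0x2 0x4 0x8; do
      ./build/bin/spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait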
00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:23.045 ************************************ 00:26:23.045 START TEST nvmf_wait_for_buf 00:26:23.045 ************************************ 00:26:23.045 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:23.045 * Looking for test storage... 00:26:23.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:23.306 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:26:23.306 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # lcov --version 00:26:23.306 15:22:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:26:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.306 --rc genhtml_branch_coverage=1 00:26:23.306 --rc genhtml_function_coverage=1 00:26:23.306 --rc genhtml_legend=1 00:26:23.306 --rc geninfo_all_blocks=1 00:26:23.306 --rc geninfo_unexecuted_blocks=1 00:26:23.306 00:26:23.306 ' 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:26:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.306 --rc genhtml_branch_coverage=1 00:26:23.306 --rc genhtml_function_coverage=1 00:26:23.306 --rc genhtml_legend=1 00:26:23.306 --rc geninfo_all_blocks=1 00:26:23.306 --rc geninfo_unexecuted_blocks=1 00:26:23.306 00:26:23.306 ' 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:26:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.306 --rc genhtml_branch_coverage=1 00:26:23.306 --rc genhtml_function_coverage=1 00:26:23.306 --rc genhtml_legend=1 00:26:23.306 --rc geninfo_all_blocks=1 00:26:23.306 --rc geninfo_unexecuted_blocks=1 00:26:23.306 00:26:23.306 ' 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:26:23.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.306 --rc genhtml_branch_coverage=1 00:26:23.306 --rc genhtml_function_coverage=1 00:26:23.306 --rc genhtml_legend=1 00:26:23.306 --rc geninfo_all_blocks=1 00:26:23.306 --rc geninfo_unexecuted_blocks=1 00:26:23.306 00:26:23.306 ' 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.306 15:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.306 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:23.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.307 15:22:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.599 
15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:26.599 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:26.599 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:26.599 Found net devices under 0000:84:00.0: cvl_0_0 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:26.599 Found net devices under 0000:84:00.1: cvl_0_1 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.599 15:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:26.599 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:26.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:26:26.600 00:26:26.600 --- 10.0.0.2 ping statistics --- 00:26:26.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.600 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:26:26.600 00:26:26.600 --- 10.0.0.1 ping statistics --- 00:26:26.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.600 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3233416 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3233416 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3233416 ']' 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.600 15:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.600 [2024-10-28 15:22:13.079296] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:26:26.600 [2024-10-28 15:22:13.079395] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.600 [2024-10-28 15:22:13.233004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.600 [2024-10-28 15:22:13.347692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.600 [2024-10-28 15:22:13.347816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.600 [2024-10-28 15:22:13.347853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.600 [2024-10-28 15:22:13.347885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.600 [2024-10-28 15:22:13.347912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.600 [2024-10-28 15:22:13.349334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.861 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:26.862 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.862 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.862 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.862 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:26.862 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.862 15:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.122 Malloc0 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.122 [2024-10-28 15:22:13.881251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.122 [2024-10-28 15:22:13.909547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.122 15:22:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:27.383 [2024-10-28 15:22:14.052924] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:28.765 Initializing NVMe Controllers 00:26:28.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:28.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:28.765 Initialization complete. Launching workers. 00:26:28.765 ======================================================== 00:26:28.766 Latency(us) 00:26:28.766 Device Information : IOPS MiB/s Average min max 00:26:28.766 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32224.93 6729.39 62867.74 00:26:28.766 ======================================================== 00:26:28.766 Total : 129.00 16.12 32224.93 6729.39 62867.74 00:26:28.766 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:28.766 rmmod nvme_tcp 00:26:28.766 rmmod nvme_fabrics 00:26:28.766 rmmod nvme_keyring 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3233416 ']' 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3233416 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3233416 ']' 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3233416 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:28.766 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3233416 00:26:29.025 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:29.025 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:29.025 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3233416' 00:26:29.025 killing process with pid 3233416 00:26:29.025 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3233416 00:26:29.025 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3233416 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.285 15:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:31.199 00:26:31.199 real 0m8.175s 00:26:31.199 user 0m4.182s 00:26:31.199 sys 0m2.846s 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:31.199 ************************************ 00:26:31.199 END TEST nvmf_wait_for_buf 00:26:31.199 ************************************ 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:26:31.199 15:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.199 15:22:18 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:34.493 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:34.493 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:34.493 Found net devices under 0000:84:00.0: cvl_0_0 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:34.493 Found net devices under 0000:84:00.1: cvl_0_1 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:34.493 ************************************ 00:26:34.493 START TEST nvmf_perf_adq 00:26:34.493 ************************************ 00:26:34.493 15:22:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:34.493 * Looking for test storage... 00:26:34.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # lcov --version 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.493 15:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.493 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:26:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.494 --rc genhtml_branch_coverage=1 00:26:34.494 --rc genhtml_function_coverage=1 00:26:34.494 --rc genhtml_legend=1 00:26:34.494 --rc geninfo_all_blocks=1 00:26:34.494 --rc geninfo_unexecuted_blocks=1 00:26:34.494 00:26:34.494 ' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:26:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.494 --rc genhtml_branch_coverage=1 00:26:34.494 --rc genhtml_function_coverage=1 00:26:34.494 --rc genhtml_legend=1 00:26:34.494 --rc geninfo_all_blocks=1 00:26:34.494 --rc geninfo_unexecuted_blocks=1 00:26:34.494 00:26:34.494 ' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:26:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.494 --rc genhtml_branch_coverage=1 00:26:34.494 --rc genhtml_function_coverage=1 00:26:34.494 --rc genhtml_legend=1 00:26:34.494 --rc geninfo_all_blocks=1 00:26:34.494 --rc geninfo_unexecuted_blocks=1 00:26:34.494 00:26:34.494 ' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:26:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.494 --rc genhtml_branch_coverage=1 00:26:34.494 --rc genhtml_function_coverage=1 00:26:34.494 --rc genhtml_legend=1 00:26:34.494 --rc geninfo_all_blocks=1 00:26:34.494 --rc geninfo_unexecuted_blocks=1 00:26:34.494 00:26:34.494 ' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
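
The lt 1.15 2 / cmp_versions steps traced above perform a field-by-field dotted-version comparison: each version string is split on ".", "-" and ":", and the fields are compared numerically from left to right. A minimal standalone sketch of the same idea follows (the helper name version_lt is invented for this sketch, missing fields are padded with zeros here, and this is an illustration only, not the repository's scripts/common.sh):

  # version_lt A B -> exit 0 (true) when dotted version A sorts before version B.
  # Simplified sketch of the comparison flow traced above; numeric fields only.
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < len; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0, so "2" compares like "2.0"
          ((x > y)) && return 1
          ((x < y)) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints: lcov 1.15 predates 2
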
00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:34.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:34.494 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:34.494 15:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:37.792 15:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:37.792 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:37.792 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:37.792 Found net devices under 0000:84:00.0: cvl_0_0 00:26:37.792 15:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:37.792 Found net devices under 0000:84:00.1: cvl_0_1 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:37.792 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:38.052 15:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:40.595 15:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:45.875 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:45.876 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:45.876 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:45.876 Found net devices under 0000:84:00.0: cvl_0_0 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:45.876 Found net devices under 0000:84:00.1: cvl_0_1 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.876 15:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:45.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:26:45.876 00:26:45.876 --- 10.0.0.2 ping statistics --- 00:26:45.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.876 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:26:45.876 00:26:45.876 --- 10.0.0.1 ping statistics --- 00:26:45.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.876 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.876 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3238396 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3238396 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3238396 ']' 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.877 [2024-10-28 15:22:32.143822] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
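
Stripped of the xtrace prefixes, the network bring-up and target launch traced above reduce to a short sequence: one E810 port is moved into a private network namespace to act as the target side, the peer port stays in the default namespace as the initiator side, the two get 10.0.0.2 and 10.0.0.1, the firewall is opened for NVMe/TCP port 4420, connectivity is ping-checked, and nvmf_tgt is started inside the namespace with --wait-for-rpc. A condensed sketch (interface names, addresses and flags are simply the ones this run used; the nvmf_tgt path is abbreviated and the iptables comment match is dropped):

  # Target-side port (cvl_0_0) goes into its own namespace; the initiator side
  # (cvl_0_1) stays in the default namespace, so NVMe/TCP traffic crosses a real link.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  modprobe nvme-tcp
  # --wait-for-rpc keeps the app in its pre-init state so the socket layer can be
  # tuned over RPC before the TCP transport is created.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

The RPC sequence traced below then tunes the posix socket implementation (sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix) before framework_start_init, creates the TCP transport with --io-unit-size 8192 --sock-priority 0, and builds a Malloc1-backed subsystem listening on 10.0.0.2:4420; the nvmf_get_stats/jq check further down confirms that each of the four poll groups ended up serving exactly one I/O queue pair while spdk_nvme_perf ran.
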
00:26:45.877 [2024-10-28 15:22:32.144006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.877 [2024-10-28 15:22:32.318552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.877 [2024-10-28 15:22:32.442406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.877 [2024-10-28 15:22:32.442526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.877 [2024-10-28 15:22:32.442561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.877 [2024-10-28 15:22:32.442590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.877 [2024-10-28 15:22:32.442615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.877 [2024-10-28 15:22:32.446205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.877 [2024-10-28 15:22:32.446301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.877 [2024-10-28 15:22:32.446398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.877 [2024-10-28 15:22:32.446402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.877 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.135 
15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.135 [2024-10-28 15:22:32.865543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.135 Malloc1 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.135 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.136 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.136 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.136 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.136 [2024-10-28 15:22:32.928727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.136 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.136 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3238506 00:26:46.136 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:26:46.136 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:48.667 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:26:48.667 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.667 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.667 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.667 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:26:48.667 "tick_rate": 2700000000, 00:26:48.667 "poll_groups": [ 00:26:48.667 { 00:26:48.667 "name": "nvmf_tgt_poll_group_000", 00:26:48.667 "admin_qpairs": 1, 00:26:48.667 "io_qpairs": 1, 00:26:48.667 "current_admin_qpairs": 1, 00:26:48.667 "current_io_qpairs": 1, 00:26:48.667 "pending_bdev_io": 0, 00:26:48.667 "completed_nvme_io": 18821, 00:26:48.667 "transports": [ 00:26:48.667 { 00:26:48.668 "trtype": "TCP" 00:26:48.668 } 00:26:48.668 ] 00:26:48.668 }, 00:26:48.668 { 00:26:48.668 "name": "nvmf_tgt_poll_group_001", 00:26:48.668 "admin_qpairs": 0, 00:26:48.668 "io_qpairs": 1, 00:26:48.668 "current_admin_qpairs": 0, 00:26:48.668 "current_io_qpairs": 1, 00:26:48.668 "pending_bdev_io": 0, 00:26:48.668 "completed_nvme_io": 18898, 00:26:48.668 "transports": [ 00:26:48.668 { 00:26:48.668 "trtype": "TCP" 00:26:48.668 } 00:26:48.668 ] 00:26:48.668 }, 00:26:48.668 { 00:26:48.668 "name": "nvmf_tgt_poll_group_002", 00:26:48.668 "admin_qpairs": 0, 00:26:48.668 "io_qpairs": 1, 00:26:48.668 "current_admin_qpairs": 0, 00:26:48.668 "current_io_qpairs": 1, 00:26:48.668 "pending_bdev_io": 0, 00:26:48.668 "completed_nvme_io": 18935, 00:26:48.668 "transports": [ 00:26:48.668 { 00:26:48.668 "trtype": "TCP" 00:26:48.668 } 00:26:48.668 ] 00:26:48.668 }, 00:26:48.668 { 00:26:48.668 "name": "nvmf_tgt_poll_group_003", 00:26:48.668 "admin_qpairs": 0, 00:26:48.668 "io_qpairs": 1, 00:26:48.668 "current_admin_qpairs": 0, 00:26:48.668 "current_io_qpairs": 1, 00:26:48.668 "pending_bdev_io": 0, 00:26:48.668 "completed_nvme_io": 18538, 00:26:48.668 "transports": [ 00:26:48.668 { 00:26:48.668 "trtype": "TCP" 00:26:48.668 } 00:26:48.668 ] 00:26:48.668 } 00:26:48.668 ] 00:26:48.668 }' 00:26:48.668 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:48.668 15:22:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:26:48.668 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:26:48.668 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:26:48.668 15:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3238506 00:26:56.783 Initializing NVMe Controllers 00:26:56.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:56.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:56.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:56.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:26:56.783 Initialization complete. Launching workers. 00:26:56.783 ======================================================== 00:26:56.783 Latency(us) 00:26:56.783 Device Information : IOPS MiB/s Average min max 00:26:56.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10012.30 39.11 6393.47 2340.91 10594.25 00:26:56.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10208.20 39.88 6270.28 2217.28 10515.75 00:26:56.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10183.10 39.78 6286.92 2311.24 10631.38 00:26:56.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10071.40 39.34 6355.09 1896.04 11291.61 00:26:56.783 ======================================================== 00:26:56.783 Total : 40474.99 158.11 6326.04 1896.04 11291.61 00:26:56.783 00:26:56.783 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:26:56.783 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.784 rmmod nvme_tcp 00:26:56.784 rmmod nvme_fabrics 00:26:56.784 rmmod nvme_keyring 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3238396 ']' 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3238396 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3238396 ']' 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3238396 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3238396 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3238396' 00:26:56.784 killing process with pid 3238396 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3238396 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3238396 00:26:56.784 15:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.784 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.330 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.330 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:26:59.330 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:59.330 15:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:59.589 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:02.129 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.413 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:07.414 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:07.414 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:07.414 Found net devices under 0000:84:00.0: cvl_0_0 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:07.414 Found net devices under 0000:84:00.1: cvl_0_1 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.414 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.415 15:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:27:07.415 00:27:07.415 --- 10.0.0.2 ping statistics --- 00:27:07.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.415 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:27:07.415 00:27:07.415 --- 10.0.0.1 ping statistics --- 00:27:07.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.415 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:07.415 net.core.busy_poll = 1 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:07.415 net.core.busy_read = 1 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3241101 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3241101 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3241101 ']' 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.415 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.415 [2024-10-28 15:22:54.008425] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:07.415 [2024-10-28 15:22:54.008598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.415 [2024-10-28 15:22:54.194424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.676 [2024-10-28 15:22:54.320673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:07.676 [2024-10-28 15:22:54.320784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.676 [2024-10-28 15:22:54.320822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.676 [2024-10-28 15:22:54.320851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.676 [2024-10-28 15:22:54.320892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.676 [2024-10-28 15:22:54.324295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.676 [2024-10-28 15:22:54.324398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.676 [2024-10-28 15:22:54.324491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.676 [2024-10-28 15:22:54.324494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.676 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.967 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:07.967 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:07.967 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.968 15:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.968 [2024-10-28 15:22:54.674625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.968 Malloc1 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.968 [2024-10-28 15:22:54.741688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3241257 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:07.968 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:09.894 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:09.894 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.894 15:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.152 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.152 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:10.152 "tick_rate": 2700000000, 00:27:10.152 "poll_groups": [ 00:27:10.152 { 00:27:10.152 "name": "nvmf_tgt_poll_group_000", 00:27:10.152 "admin_qpairs": 1, 00:27:10.152 "io_qpairs": 1, 00:27:10.152 "current_admin_qpairs": 1, 00:27:10.152 "current_io_qpairs": 1, 00:27:10.152 "pending_bdev_io": 0, 00:27:10.152 "completed_nvme_io": 25447, 00:27:10.152 "transports": [ 00:27:10.152 { 00:27:10.152 "trtype": "TCP" 00:27:10.152 } 00:27:10.152 ] 00:27:10.152 }, 00:27:10.152 { 00:27:10.152 "name": "nvmf_tgt_poll_group_001", 00:27:10.152 "admin_qpairs": 0, 00:27:10.152 "io_qpairs": 3, 00:27:10.152 "current_admin_qpairs": 0, 00:27:10.152 "current_io_qpairs": 3, 00:27:10.152 "pending_bdev_io": 0, 00:27:10.152 "completed_nvme_io": 25328, 00:27:10.152 "transports": [ 00:27:10.152 { 00:27:10.152 "trtype": "TCP" 00:27:10.152 } 00:27:10.152 ] 00:27:10.152 }, 00:27:10.152 { 00:27:10.152 "name": "nvmf_tgt_poll_group_002", 00:27:10.152 "admin_qpairs": 0, 00:27:10.152 "io_qpairs": 0, 00:27:10.152 "current_admin_qpairs": 0, 00:27:10.152 "current_io_qpairs": 0, 00:27:10.152 "pending_bdev_io": 0, 00:27:10.152 "completed_nvme_io": 0, 00:27:10.152 "transports": [ 00:27:10.152 { 00:27:10.152 "trtype": "TCP" 00:27:10.152 } 00:27:10.152 ] 00:27:10.152 }, 00:27:10.152 { 00:27:10.152 "name": "nvmf_tgt_poll_group_003", 00:27:10.152 "admin_qpairs": 0, 00:27:10.152 "io_qpairs": 0, 00:27:10.152 "current_admin_qpairs": 0, 00:27:10.152 "current_io_qpairs": 0, 00:27:10.152 "pending_bdev_io": 0, 00:27:10.152 "completed_nvme_io": 0, 00:27:10.152 "transports": [ 00:27:10.152 { 00:27:10.152 "trtype": "TCP" 00:27:10.152 } 00:27:10.152 ] 00:27:10.152 } 00:27:10.152 ] 00:27:10.152 }' 00:27:10.152 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:10.152 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:10.152 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:10.152 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:10.152 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3241257 00:27:18.259 Initializing NVMe Controllers 00:27:18.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:18.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:18.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:18.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:18.259 Initialization complete. Launching workers. 
00:27:18.259 ======================================================== 00:27:18.259 Latency(us) 00:27:18.259 Device Information : IOPS MiB/s Average min max 00:27:18.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3959.00 15.46 16168.77 2291.16 63486.64 00:27:18.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4934.00 19.27 12972.77 1949.53 61449.37 00:27:18.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4626.70 18.07 13836.25 2193.36 61383.61 00:27:18.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13527.00 52.84 4731.24 2297.74 6893.54 00:27:18.259 ======================================================== 00:27:18.259 Total : 27046.70 105.65 9466.42 1949.53 63486.64 00:27:18.259 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.259 rmmod nvme_tcp 00:27:18.259 rmmod nvme_fabrics 00:27:18.259 rmmod nvme_keyring 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3241101 ']' 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3241101 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3241101 ']' 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3241101 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.259 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3241101 00:27:18.259 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.259 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.259 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3241101' 00:27:18.259 killing process with pid 3241101 00:27:18.260 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3241101 00:27:18.260 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3241101 00:27:18.520 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.520 
15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.520 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.520 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:18.520 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:18.520 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.520 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.780 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.780 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.780 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.780 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.780 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:20.693 00:27:20.693 real 0m46.458s 00:27:20.693 user 2m43.589s 00:27:20.693 sys 0m10.832s 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.693 ************************************ 00:27:20.693 END TEST nvmf_perf_adq 00:27:20.693 ************************************ 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:20.693 ************************************ 00:27:20.693 START TEST nvmf_shutdown 00:27:20.693 ************************************ 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:20.693 * Looking for test storage... 
00:27:20.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.693 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:27:20.694 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:27:20.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.953 --rc genhtml_branch_coverage=1 00:27:20.953 --rc genhtml_function_coverage=1 00:27:20.953 --rc genhtml_legend=1 00:27:20.953 --rc geninfo_all_blocks=1 00:27:20.953 --rc geninfo_unexecuted_blocks=1 00:27:20.953 00:27:20.953 ' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:27:20.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.953 --rc genhtml_branch_coverage=1 00:27:20.953 --rc genhtml_function_coverage=1 00:27:20.953 --rc genhtml_legend=1 00:27:20.953 --rc geninfo_all_blocks=1 00:27:20.953 --rc geninfo_unexecuted_blocks=1 00:27:20.953 00:27:20.953 ' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:27:20.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.953 --rc genhtml_branch_coverage=1 00:27:20.953 --rc genhtml_function_coverage=1 00:27:20.953 --rc genhtml_legend=1 00:27:20.953 --rc geninfo_all_blocks=1 00:27:20.953 --rc geninfo_unexecuted_blocks=1 00:27:20.953 00:27:20.953 ' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:27:20.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.953 --rc genhtml_branch_coverage=1 00:27:20.953 --rc genhtml_function_coverage=1 00:27:20.953 --rc genhtml_legend=1 00:27:20.953 --rc geninfo_all_blocks=1 00:27:20.953 --rc geninfo_unexecuted_blocks=1 00:27:20.953 00:27:20.953 ' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.953 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:20.954 15:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:20.954 ************************************ 00:27:20.954 START TEST nvmf_shutdown_tc1 00:27:20.954 ************************************ 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.954 15:23:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.248 15:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.248 15:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:24.248 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:24.248 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:24.248 Found net devices under 0000:84:00.0: cvl_0_0 00:27:24.248 15:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.248 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:24.249 Found net devices under 0000:84:00.1: cvl_0_1 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:27:24.249 00:27:24.249 --- 10.0.0.2 ping statistics --- 00:27:24.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.249 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:27:24.249 00:27:24.249 --- 10.0.0.1 ping statistics --- 00:27:24.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.249 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3244441 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3244441 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3244441 ']' 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
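Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above builds the test topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace and acts as the target side, cvl_0_1 stays in the root namespace as the initiator side, and the two pings confirm reachability in both directions. A condensed restatement of those steps, using the exact names and addresses from this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # root namespace -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator side

Everything the target does from here on is wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why nvmf_tgt below is started with that prefix.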
00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.249 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 [2024-10-28 15:23:10.718481] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:24.249 [2024-10-28 15:23:10.718587] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.249 [2024-10-28 15:23:10.862615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.249 [2024-10-28 15:23:10.983541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.249 [2024-10-28 15:23:10.983669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.249 [2024-10-28 15:23:10.983709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.249 [2024-10-28 15:23:10.983738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.249 [2024-10-28 15:23:10.983763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.249 [2024-10-28 15:23:10.987336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.249 [2024-10-28 15:23:10.987435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.249 [2024-10-28 15:23:10.987487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:24.249 [2024-10-28 15:23:10.987491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:25.623 [2024-10-28 15:23:12.125016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:25.623 15:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.623 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:25.623 Malloc1 
00:27:25.623 [2024-10-28 15:23:12.239663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.623 Malloc2 00:27:25.623 Malloc3 00:27:25.623 Malloc4 00:27:25.623 Malloc5 00:27:25.623 Malloc6 00:27:25.883 Malloc7 00:27:25.883 Malloc8 00:27:25.883 Malloc9 00:27:25.883 Malloc10 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3244752 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3244752 /var/tmp/bdevperf.sock 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3244752 ']' 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:25.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
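The per-subsystem RPC batch that shutdown.sh writes into rpcs.txt is not echoed in this log; only its effect is visible (Malloc1 through Malloc10 plus the listener on 10.0.0.2:4420). With the stock SPDK RPC verbs the batch for the ten subsystems would look roughly like the sketch below; the malloc size, block size and serial numbers are illustrative, not taken from this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # logged above at shutdown.sh@21
    for i in $(seq 1 10); do
        rpc.py bdev_malloc_create -b Malloc$i 64 512      # backing bdev (size/block size assumed)
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done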
00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.883 { 00:27:25.883 "params": { 00:27:25.883 "name": "Nvme$subsystem", 00:27:25.883 "trtype": "$TEST_TRANSPORT", 00:27:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.883 "adrfam": "ipv4", 00:27:25.883 "trsvcid": "$NVMF_PORT", 00:27:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.883 "hdgst": ${hdgst:-false}, 00:27:25.883 "ddgst": ${ddgst:-false} 00:27:25.883 }, 00:27:25.883 "method": "bdev_nvme_attach_controller" 00:27:25.883 } 00:27:25.883 EOF 00:27:25.883 )") 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.883 { 00:27:25.883 "params": { 00:27:25.883 "name": "Nvme$subsystem", 00:27:25.883 "trtype": "$TEST_TRANSPORT", 00:27:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.883 "adrfam": "ipv4", 00:27:25.883 "trsvcid": "$NVMF_PORT", 00:27:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.883 "hdgst": ${hdgst:-false}, 00:27:25.883 "ddgst": ${ddgst:-false} 00:27:25.883 }, 00:27:25.883 "method": "bdev_nvme_attach_controller" 00:27:25.883 } 00:27:25.883 EOF 00:27:25.883 )") 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.883 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.883 { 00:27:25.883 "params": { 00:27:25.883 "name": "Nvme$subsystem", 00:27:25.883 "trtype": "$TEST_TRANSPORT", 00:27:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.883 "adrfam": "ipv4", 00:27:25.883 "trsvcid": "$NVMF_PORT", 00:27:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.884 { 00:27:25.884 "params": { 00:27:25.884 "name": "Nvme$subsystem", 00:27:25.884 
"trtype": "$TEST_TRANSPORT", 00:27:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.884 "adrfam": "ipv4", 00:27:25.884 "trsvcid": "$NVMF_PORT", 00:27:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.884 { 00:27:25.884 "params": { 00:27:25.884 "name": "Nvme$subsystem", 00:27:25.884 "trtype": "$TEST_TRANSPORT", 00:27:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.884 "adrfam": "ipv4", 00:27:25.884 "trsvcid": "$NVMF_PORT", 00:27:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.884 { 00:27:25.884 "params": { 00:27:25.884 "name": "Nvme$subsystem", 00:27:25.884 "trtype": "$TEST_TRANSPORT", 00:27:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.884 "adrfam": "ipv4", 00:27:25.884 "trsvcid": "$NVMF_PORT", 00:27:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.884 { 00:27:25.884 "params": { 00:27:25.884 "name": "Nvme$subsystem", 00:27:25.884 "trtype": "$TEST_TRANSPORT", 00:27:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.884 "adrfam": "ipv4", 00:27:25.884 "trsvcid": "$NVMF_PORT", 00:27:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.884 15:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.884 { 00:27:25.884 "params": { 00:27:25.884 "name": "Nvme$subsystem", 00:27:25.884 "trtype": "$TEST_TRANSPORT", 00:27:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.884 "adrfam": "ipv4", 00:27:25.884 "trsvcid": "$NVMF_PORT", 00:27:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.884 { 00:27:25.884 "params": { 00:27:25.884 "name": "Nvme$subsystem", 00:27:25.884 "trtype": "$TEST_TRANSPORT", 00:27:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.884 "adrfam": "ipv4", 00:27:25.884 "trsvcid": "$NVMF_PORT", 00:27:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.884 { 00:27:25.884 "params": { 00:27:25.884 "name": "Nvme$subsystem", 00:27:25.884 "trtype": "$TEST_TRANSPORT", 00:27:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.884 "adrfam": "ipv4", 00:27:25.884 "trsvcid": "$NVMF_PORT", 00:27:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.884 "hdgst": ${hdgst:-false}, 00:27:25.884 "ddgst": ${ddgst:-false} 00:27:25.884 }, 00:27:25.884 "method": "bdev_nvme_attach_controller" 00:27:25.884 } 00:27:25.884 EOF 00:27:25.884 )") 00:27:25.884 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:26.143 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
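gen_nvmf_target_json renders one of the here-document fragments above per subsystem, substitutes this run's transport values (tcp, 10.0.0.2, port 4420, cnode1..10), joins the fragments with commas and checks the result with jq before handing it to bdev_svc over the /dev/fd/63 process substitution seen at shutdown.sh@78. Each rendered entry is just the JSON form of a bdev_nvme_attach_controller call; issued as an explicit RPC against the same app socket it would look roughly like this (NQNs and socket path from this run):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The controller name Nvme1 is what becomes the Nvme1n1 bdev that bdevperf reports on further down.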
00:27:26.143 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:26.143 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:26.143 "params": { 00:27:26.143 "name": "Nvme1", 00:27:26.143 "trtype": "tcp", 00:27:26.143 "traddr": "10.0.0.2", 00:27:26.143 "adrfam": "ipv4", 00:27:26.143 "trsvcid": "4420", 00:27:26.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:26.143 "hdgst": false, 00:27:26.143 "ddgst": false 00:27:26.143 }, 00:27:26.143 "method": "bdev_nvme_attach_controller" 00:27:26.143 },{ 00:27:26.143 "params": { 00:27:26.143 "name": "Nvme2", 00:27:26.143 "trtype": "tcp", 00:27:26.143 "traddr": "10.0.0.2", 00:27:26.143 "adrfam": "ipv4", 00:27:26.143 "trsvcid": "4420", 00:27:26.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:26.143 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:26.143 "hdgst": false, 00:27:26.143 "ddgst": false 00:27:26.143 }, 00:27:26.143 "method": "bdev_nvme_attach_controller" 00:27:26.143 },{ 00:27:26.143 "params": { 00:27:26.143 "name": "Nvme3", 00:27:26.143 "trtype": "tcp", 00:27:26.143 "traddr": "10.0.0.2", 00:27:26.143 "adrfam": "ipv4", 00:27:26.143 "trsvcid": "4420", 00:27:26.143 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:26.143 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:26.143 "hdgst": false, 00:27:26.143 "ddgst": false 00:27:26.143 }, 00:27:26.143 "method": "bdev_nvme_attach_controller" 00:27:26.143 },{ 00:27:26.143 "params": { 00:27:26.143 "name": "Nvme4", 00:27:26.143 "trtype": "tcp", 00:27:26.143 "traddr": "10.0.0.2", 00:27:26.143 "adrfam": "ipv4", 00:27:26.143 "trsvcid": "4420", 00:27:26.143 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:26.143 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:26.143 "hdgst": false, 00:27:26.143 "ddgst": false 00:27:26.143 }, 00:27:26.143 "method": "bdev_nvme_attach_controller" 00:27:26.143 },{ 00:27:26.143 "params": { 00:27:26.143 "name": "Nvme5", 00:27:26.143 "trtype": "tcp", 00:27:26.143 "traddr": "10.0.0.2", 00:27:26.143 "adrfam": "ipv4", 00:27:26.143 "trsvcid": "4420", 00:27:26.143 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:26.143 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:26.143 "hdgst": false, 00:27:26.143 "ddgst": false 00:27:26.143 }, 00:27:26.143 "method": "bdev_nvme_attach_controller" 00:27:26.143 },{ 00:27:26.143 "params": { 00:27:26.143 "name": "Nvme6", 00:27:26.143 "trtype": "tcp", 00:27:26.143 "traddr": "10.0.0.2", 00:27:26.143 "adrfam": "ipv4", 00:27:26.143 "trsvcid": "4420", 00:27:26.143 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:26.143 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:26.143 "hdgst": false, 00:27:26.143 "ddgst": false 00:27:26.143 }, 00:27:26.143 "method": "bdev_nvme_attach_controller" 00:27:26.143 },{ 00:27:26.144 "params": { 00:27:26.144 "name": "Nvme7", 00:27:26.144 "trtype": "tcp", 00:27:26.144 "traddr": "10.0.0.2", 00:27:26.144 "adrfam": "ipv4", 00:27:26.144 "trsvcid": "4420", 00:27:26.144 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:26.144 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:26.144 "hdgst": false, 00:27:26.144 "ddgst": false 00:27:26.144 }, 00:27:26.144 "method": "bdev_nvme_attach_controller" 00:27:26.144 },{ 00:27:26.144 "params": { 00:27:26.144 "name": "Nvme8", 00:27:26.144 "trtype": "tcp", 00:27:26.144 "traddr": "10.0.0.2", 00:27:26.144 "adrfam": "ipv4", 00:27:26.144 "trsvcid": "4420", 00:27:26.144 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:26.144 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:26.144 "hdgst": false, 00:27:26.144 "ddgst": false 00:27:26.144 }, 00:27:26.144 "method": "bdev_nvme_attach_controller" 00:27:26.144 },{ 00:27:26.144 "params": { 00:27:26.144 "name": "Nvme9", 00:27:26.144 "trtype": "tcp", 00:27:26.144 "traddr": "10.0.0.2", 00:27:26.144 "adrfam": "ipv4", 00:27:26.144 "trsvcid": "4420", 00:27:26.144 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:26.144 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:26.144 "hdgst": false, 00:27:26.144 "ddgst": false 00:27:26.144 }, 00:27:26.144 "method": "bdev_nvme_attach_controller" 00:27:26.144 },{ 00:27:26.144 "params": { 00:27:26.144 "name": "Nvme10", 00:27:26.144 "trtype": "tcp", 00:27:26.144 "traddr": "10.0.0.2", 00:27:26.144 "adrfam": "ipv4", 00:27:26.144 "trsvcid": "4420", 00:27:26.144 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:26.144 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:26.144 "hdgst": false, 00:27:26.144 "ddgst": false 00:27:26.144 }, 00:27:26.144 "method": "bdev_nvme_attach_controller" 00:27:26.144 }' 00:27:26.144 [2024-10-28 15:23:12.761884] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:26.144 [2024-10-28 15:23:12.761992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:26.144 [2024-10-28 15:23:12.835060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.144 [2024-10-28 15:23:12.894720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3244752 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:28.042 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:29.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3244752 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3244441 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.415 { 00:27:29.415 "params": { 00:27:29.415 "name": "Nvme$subsystem", 00:27:29.415 "trtype": "$TEST_TRANSPORT", 00:27:29.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.415 "adrfam": "ipv4", 00:27:29.415 "trsvcid": "$NVMF_PORT", 00:27:29.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.415 "hdgst": ${hdgst:-false}, 00:27:29.415 "ddgst": ${ddgst:-false} 00:27:29.415 }, 00:27:29.415 "method": "bdev_nvme_attach_controller" 00:27:29.415 } 00:27:29.415 EOF 00:27:29.415 )") 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.415 { 00:27:29.415 "params": { 00:27:29.415 "name": "Nvme$subsystem", 00:27:29.415 "trtype": "$TEST_TRANSPORT", 00:27:29.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.415 "adrfam": "ipv4", 00:27:29.415 "trsvcid": "$NVMF_PORT", 00:27:29.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.415 "hdgst": ${hdgst:-false}, 00:27:29.415 "ddgst": ${ddgst:-false} 00:27:29.415 }, 00:27:29.415 "method": "bdev_nvme_attach_controller" 00:27:29.415 } 00:27:29.415 EOF 00:27:29.415 )") 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.415 { 00:27:29.415 "params": { 00:27:29.415 "name": "Nvme$subsystem", 00:27:29.415 "trtype": "$TEST_TRANSPORT", 00:27:29.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.415 "adrfam": "ipv4", 00:27:29.415 "trsvcid": "$NVMF_PORT", 00:27:29.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.415 "hdgst": ${hdgst:-false}, 00:27:29.415 "ddgst": ${ddgst:-false} 00:27:29.415 }, 00:27:29.415 "method": "bdev_nvme_attach_controller" 00:27:29.415 } 00:27:29.415 EOF 00:27:29.415 )") 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.415 { 00:27:29.415 "params": { 00:27:29.415 "name": "Nvme$subsystem", 00:27:29.415 "trtype": "$TEST_TRANSPORT", 00:27:29.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.415 "adrfam": "ipv4", 00:27:29.415 
"trsvcid": "$NVMF_PORT", 00:27:29.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.415 "hdgst": ${hdgst:-false}, 00:27:29.415 "ddgst": ${ddgst:-false} 00:27:29.415 }, 00:27:29.415 "method": "bdev_nvme_attach_controller" 00:27:29.415 } 00:27:29.415 EOF 00:27:29.415 )") 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.415 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.416 { 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme$subsystem", 00:27:29.416 "trtype": "$TEST_TRANSPORT", 00:27:29.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "$NVMF_PORT", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.416 "hdgst": ${hdgst:-false}, 00:27:29.416 "ddgst": ${ddgst:-false} 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 } 00:27:29.416 EOF 00:27:29.416 )") 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.416 { 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme$subsystem", 00:27:29.416 "trtype": "$TEST_TRANSPORT", 00:27:29.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "$NVMF_PORT", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.416 "hdgst": ${hdgst:-false}, 00:27:29.416 "ddgst": ${ddgst:-false} 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 } 00:27:29.416 EOF 00:27:29.416 )") 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.416 { 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme$subsystem", 00:27:29.416 "trtype": "$TEST_TRANSPORT", 00:27:29.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "$NVMF_PORT", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.416 "hdgst": ${hdgst:-false}, 00:27:29.416 "ddgst": ${ddgst:-false} 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 } 00:27:29.416 EOF 00:27:29.416 )") 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.416 { 00:27:29.416 
"params": { 00:27:29.416 "name": "Nvme$subsystem", 00:27:29.416 "trtype": "$TEST_TRANSPORT", 00:27:29.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "$NVMF_PORT", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.416 "hdgst": ${hdgst:-false}, 00:27:29.416 "ddgst": ${ddgst:-false} 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 } 00:27:29.416 EOF 00:27:29.416 )") 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.416 { 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme$subsystem", 00:27:29.416 "trtype": "$TEST_TRANSPORT", 00:27:29.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "$NVMF_PORT", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.416 "hdgst": ${hdgst:-false}, 00:27:29.416 "ddgst": ${ddgst:-false} 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 } 00:27:29.416 EOF 00:27:29.416 )") 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:29.416 { 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme$subsystem", 00:27:29.416 "trtype": "$TEST_TRANSPORT", 00:27:29.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "$NVMF_PORT", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.416 "hdgst": ${hdgst:-false}, 00:27:29.416 "ddgst": ${ddgst:-false} 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 } 00:27:29.416 EOF 00:27:29.416 )") 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:29.416 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme1", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme2", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme3", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme4", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme5", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme6", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme7", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme8", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme9", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:29.416 "hdgst": false, 00:27:29.416 "ddgst": false 00:27:29.416 }, 00:27:29.416 "method": "bdev_nvme_attach_controller" 00:27:29.416 },{ 00:27:29.416 "params": { 00:27:29.416 "name": "Nvme10", 00:27:29.416 "trtype": "tcp", 00:27:29.416 "traddr": "10.0.0.2", 00:27:29.416 "adrfam": "ipv4", 00:27:29.416 "trsvcid": "4420", 00:27:29.416 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:29.416 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:29.417 "hdgst": false, 00:27:29.417 "ddgst": false 00:27:29.417 }, 00:27:29.417 "method": "bdev_nvme_attach_controller" 00:27:29.417 }' 00:27:29.417 [2024-10-28 15:23:15.936750] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:29.417 [2024-10-28 15:23:15.936836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245165 ] 00:27:29.417 [2024-10-28 15:23:16.015718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.417 [2024-10-28 15:23:16.077868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.316 Running I/O for 1 seconds... 00:27:32.250 1677.00 IOPS, 104.81 MiB/s 00:27:32.250 Latency(us) 00:27:32.250 [2024-10-28T14:23:19.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme1n1 : 1.17 218.09 13.63 0.00 0.00 288820.15 19515.16 270299.59 00:27:32.250 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme2n1 : 1.17 221.74 13.86 0.00 0.00 280163.04 5509.88 267192.70 00:27:32.250 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme3n1 : 1.16 220.52 13.78 0.00 0.00 277260.89 18835.53 268746.15 00:27:32.250 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme4n1 : 1.15 225.19 14.07 0.00 0.00 265456.46 7670.14 273406.48 00:27:32.250 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme5n1 : 1.19 214.47 13.40 0.00 0.00 274307.03 19709.35 287387.50 00:27:32.250 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme6n1 : 1.19 215.78 13.49 0.00 0.00 269087.29 22524.97 268746.15 00:27:32.250 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme7n1 : 1.18 216.76 13.55 0.00 0.00 262815.10 20874.43 265639.25 00:27:32.250 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification 
LBA range: start 0x0 length 0x400 00:27:32.250 Nvme8n1 : 1.20 217.43 13.59 0.00 0.00 256615.64 4684.61 270299.59 00:27:32.250 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme9n1 : 1.21 211.85 13.24 0.00 0.00 259697.59 21262.79 288940.94 00:27:32.250 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.250 Verification LBA range: start 0x0 length 0x400 00:27:32.250 Nvme10n1 : 1.20 213.09 13.32 0.00 0.00 253847.89 18641.35 288940.94 00:27:32.250 [2024-10-28T14:23:19.117Z] =================================================================================================================== 00:27:32.250 [2024-10-28T14:23:19.117Z] Total : 2174.93 135.93 0.00 0.00 268800.60 4684.61 288940.94 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:32.508 rmmod nvme_tcp 00:27:32.508 rmmod nvme_fabrics 00:27:32.508 rmmod nvme_keyring 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:32.508 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3244441 ']' 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3244441 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3244441 ']' 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3244441 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244441 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244441' 00:27:32.509 killing process with pid 3244441 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3244441 00:27:32.509 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3244441 00:27:33.075 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.075 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.075 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.075 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:27:33.075 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:27:33.075 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:33.075 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.335 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.335 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.335 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.335 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.335 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.245 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.245 00:27:35.245 real 0m14.298s 00:27:35.245 user 0m41.614s 00:27:35.245 sys 0m4.080s 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.245 ************************************ 00:27:35.245 END TEST nvmf_shutdown_tc1 00:27:35.245 ************************************ 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 
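The teardown traced above follows the stock autotest pattern: resolve the command name behind the target pid, refuse to kill anything that resolves to sudo, then kill and reap the process. A minimal sketch of that helper as it appears in the trace (shape inferred from the traced autotest_common.sh commands, not the verbatim source):

    killprocess() {
        local pid=$1
        # Only the Linux path from the trace is sketched here.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
        [ "$process_name" = sudo ] && return 1            # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap it if it is our child
    }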
00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.245 ************************************ 00:27:35.245 START TEST nvmf_shutdown_tc2 00:27:35.245 ************************************ 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:35.245 15:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:35.245 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:35.245 15:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.245 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:35.246 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:35.246 Found net devices under 0000:84:00.0: cvl_0_0 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:35.246 Found net devices under 0000:84:00.1: cvl_0_1 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.246 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:35.507 15:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:35.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:27:35.507 00:27:35.507 --- 10.0.0.2 ping statistics --- 00:27:35.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.507 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:35.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:27:35.507 00:27:35.507 --- 10.0.0.1 ping statistics --- 00:27:35.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.507 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3245934 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3245934 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3245934 ']' 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
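The two pings above confirm reachability in both directions across the cvl_0_0_ns_spdk namespace before the target comes up; nvmf_tgt is then started inside that namespace and waitforlisten blocks until its RPC socket answers. Roughly, with paths abbreviated and addresses taken from the trace (a sketch, not the exact helpers):

    NS=cvl_0_0_ns_spdk
    ping -c 1 10.0.0.2                          # host side -> target-side address
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> initiator-side address
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                    # polls until /var/tmp/spdk.sock accepts RPCs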
00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.507 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.768 [2024-10-28 15:23:22.421923] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:35.768 [2024-10-28 15:23:22.422103] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.768 [2024-10-28 15:23:22.600595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.027 [2024-10-28 15:23:22.724679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.027 [2024-10-28 15:23:22.724808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.027 [2024-10-28 15:23:22.724865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.027 [2024-10-28 15:23:22.724917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.027 [2024-10-28 15:23:22.724947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.027 [2024-10-28 15:23:22.728687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.027 [2024-10-28 15:23:22.728766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.027 [2024-10-28 15:23:22.728820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:36.027 [2024-10-28 15:23:22.728824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.027 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.285 [2024-10-28 15:23:22.893245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:36.285 15:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.285 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.286 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 Malloc1 
00:27:36.286 [2024-10-28 15:23:23.005077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.286 Malloc2 00:27:36.286 Malloc3 00:27:36.286 Malloc4 00:27:36.544 Malloc5 00:27:36.544 Malloc6 00:27:36.544 Malloc7 00:27:36.544 Malloc8 00:27:36.544 Malloc9 00:27:36.802 Malloc10 00:27:36.802 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.802 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:36.802 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:36.802 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3246111 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3246111 /var/tmp/bdevperf.sock 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3246111 ']' 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:36.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
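With the listener up and the ten Malloc-backed subsystems created, the test launches bdevperf against a JSON config generated on the fly; the /dev/fd/63 in the trace is what a bash process substitution looks like once expanded. A sketch of the invocation (paths abbreviated; gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, as printed a few lines below):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock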
00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 
"trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.803 "adrfam": "ipv4", 00:27:36.803 "trsvcid": "$NVMF_PORT", 00:27:36.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.803 "hdgst": ${hdgst:-false}, 00:27:36.803 "ddgst": ${ddgst:-false} 00:27:36.803 }, 00:27:36.803 "method": "bdev_nvme_attach_controller" 00:27:36.803 } 00:27:36.803 EOF 00:27:36.803 )") 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.803 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.803 { 00:27:36.803 "params": { 00:27:36.803 "name": "Nvme$subsystem", 00:27:36.803 "trtype": "$TEST_TRANSPORT", 00:27:36.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "$NVMF_PORT", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.804 "hdgst": ${hdgst:-false}, 00:27:36.804 "ddgst": ${ddgst:-false} 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 } 00:27:36.804 EOF 00:27:36.804 )") 00:27:36.804 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:36.804 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:27:36.804 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:27:36.804 15:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme1", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme2", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme3", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme4", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme5", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme6", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme7", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme8", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme9", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 },{ 00:27:36.804 "params": { 00:27:36.804 "name": "Nvme10", 00:27:36.804 "trtype": "tcp", 00:27:36.804 "traddr": "10.0.0.2", 00:27:36.804 "adrfam": "ipv4", 00:27:36.804 "trsvcid": "4420", 00:27:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:36.804 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:36.804 "hdgst": false, 00:27:36.804 "ddgst": false 00:27:36.804 }, 00:27:36.804 "method": "bdev_nvme_attach_controller" 00:27:36.804 }' 00:27:36.804 [2024-10-28 15:23:23.534841] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:36.804 [2024-10-28 15:23:23.534930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246111 ] 00:27:36.804 [2024-10-28 15:23:23.617495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.062 [2024-10-28 15:23:23.680367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.960 Running I/O for 10 seconds... 00:27:38.960 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.960 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:38.960 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:38.960 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.960 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:39.217 15:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:39.474 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:39.474 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:39.474 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:39.475 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:39.475 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.475 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.475 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.475 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=78 00:27:39.475 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 78 -ge 100 ']' 00:27:39.475 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:39.732 15:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3246111 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3246111 ']' 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3246111 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3246111 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3246111' 00:27:39.732 killing process with pid 3246111 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3246111 00:27:39.732 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3246111 00:27:39.990 1809.00 IOPS, 113.06 MiB/s [2024-10-28T14:23:26.857Z] Received shutdown signal, test time was about 1.059398 seconds 00:27:39.990 00:27:39.990 Latency(us) 00:27:39.990 [2024-10-28T14:23:26.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.990 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme1n1 : 1.06 242.65 15.17 0.00 0.00 260732.40 18155.90 271853.04 00:27:39.990 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme2n1 : 1.05 244.51 15.28 0.00 0.00 254004.53 19903.53 268746.15 00:27:39.990 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme3n1 : 1.04 246.06 15.38 0.00 0.00 247445.62 19029.71 250104.79 00:27:39.990 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme4n1 : 1.05 243.89 15.24 0.00 0.00 244952.56 33981.63 250104.79 00:27:39.990 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme5n1 : 0.99 198.00 12.38 0.00 0.00 291471.21 3689.43 296708.17 00:27:39.990 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme6n1 : 1.02 191.69 11.98 0.00 0.00 292669.01 19612.25 268746.15 
00:27:39.990 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme7n1 : 1.06 241.83 15.11 0.00 0.00 232076.89 16699.54 259425.47 00:27:39.990 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme8n1 : 1.00 196.58 12.29 0.00 0.00 275548.82 4077.80 268746.15 00:27:39.990 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme9n1 : 1.03 186.30 11.64 0.00 0.00 289056.74 21262.79 276513.37 00:27:39.990 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.990 Verification LBA range: start 0x0 length 0x400 00:27:39.990 Nvme10n1 : 1.04 188.20 11.76 0.00 0.00 279019.64 5704.06 292047.83 00:27:39.990 [2024-10-28T14:23:26.857Z] =================================================================================================================== 00:27:39.990 [2024-10-28T14:23:26.857Z] Total : 2179.71 136.23 0.00 0.00 264167.89 3689.43 296708.17 00:27:40.247 15:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3245934 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.180 rmmod nvme_tcp 00:27:41.180 rmmod nvme_fabrics 00:27:41.180 rmmod nvme_keyring 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3245934 ']' 00:27:41.180 15:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3245934 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3245934 ']' 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3245934 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.180 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3245934 00:27:41.180 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:41.180 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:41.180 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3245934' 00:27:41.180 killing process with pid 3245934 00:27:41.180 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3245934 00:27:41.180 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3245934 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.117 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:44.028 00:27:44.028 real 0m8.595s 00:27:44.028 user 0m26.791s 00:27:44.028 sys 0m1.854s 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.028 15:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 ************************************ 00:27:44.028 END TEST nvmf_shutdown_tc2 00:27:44.028 ************************************ 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 ************************************ 00:27:44.028 START TEST nvmf_shutdown_tc3 00:27:44.028 ************************************ 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.028 15:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.028 15:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:44.028 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:44.028 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.028 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
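The gather_supported_nvmf_pci_devs trace in this stretch resolves which kernel interfaces sit behind the NVMe-oF-capable NICs by matching PCI vendor/device IDs (both ports on this node are Intel E810, 0x8086:0x159b) and then listing each matching function's net/ directory in sysfs. A standalone sketch of that sysfs walk, simplified from the library's pci_bus_cache handling and using only the IDs visible in the trace, would look roughly like this:

#!/usr/bin/env bash
# Minimal sketch of the sysfs walk traced above: for each PCI function whose
# vendor:device pair matches a supported NIC, list the net devices it exposes.
# IDs taken from the trace (Intel E810 family); extend the list as needed.
supported=("0x8086:0x159b" "0x8086:0x1592")
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    for id in "${supported[@]}"; do
        [[ "$vendor:$device" == "$id" ]] || continue
        # The net/ subdirectory holds one entry per kernel interface (e.g. cvl_0_0).
        for netdir in "$pci"/net/*; do
            [[ -e "$netdir" ]] && echo "Found net device under ${pci##*/}: ${netdir##*/}"
        done
    done
done

Run on the test node, this prints lines of the same shape as the "Found net devices under 0000:84:00.x" messages that follow in the trace.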
00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:44.029 Found net devices under 0000:84:00.0: cvl_0_0 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:44.029 Found net devices under 0000:84:00.1: cvl_0_1 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:44.029 15:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.029 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:44.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:27:44.287 00:27:44.287 --- 10.0.0.2 ping statistics --- 00:27:44.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.287 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:27:44.287 00:27:44.287 --- 10.0.0.1 ping statistics --- 00:27:44.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.287 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3247098 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3247098 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3247098 ']' 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
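The nvmf_tcp_init sequence just above carves the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits TCP port 4420, and reachability is confirmed with a ping in each direction before the target is launched inside the namespace. Condensed into a standalone form (root required; an illustration of the steps seen in the trace, not the nvmf/common.sh function itself), the wiring is roughly:

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk   # names as seen in the trace
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                  # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic to the listener port through any local firewall rules.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root namespace -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> root namespace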
00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:44.287 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.287 [2024-10-28 15:23:31.047457] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:44.287 [2024-10-28 15:23:31.047625] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.544 [2024-10-28 15:23:31.183784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:44.544 [2024-10-28 15:23:31.250582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.544 [2024-10-28 15:23:31.250644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.544 [2024-10-28 15:23:31.250673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.544 [2024-10-28 15:23:31.250688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.544 [2024-10-28 15:23:31.250700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.544 [2024-10-28 15:23:31.252581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.544 [2024-10-28 15:23:31.252611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.544 [2024-10-28 15:23:31.252677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:44.544 [2024-10-28 15:23:31.252682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.544 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:44.544 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:44.544 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.544 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:44.544 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.802 [2024-10-28 15:23:31.417043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:44.802 15:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.802 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.802 Malloc1 
00:27:44.802 [2024-10-28 15:23:31.518671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.802 Malloc2 00:27:44.802 Malloc3 00:27:44.802 Malloc4 00:27:45.059 Malloc5 00:27:45.059 Malloc6 00:27:45.059 Malloc7 00:27:45.059 Malloc8 00:27:45.059 Malloc9 00:27:45.317 Malloc10 00:27:45.317 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.317 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:45.317 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.317 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3247211 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3247211 /var/tmp/bdevperf.sock 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3247211 ']' 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:45.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
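The Malloc1 through Malloc10 lines above come from the create_subsystems step: the repeated cat calls evidently populate rpcs.txt, which the single rpc_cmd at shutdown.sh@36 then replays, producing ten malloc-backed subsystems behind the 10.0.0.2:4420 listener announced by the tcp.c notice. The exact RPC text never appears in the log, so the following is only an approximation of that per-subsystem sequence using stock rpc.py commands; the rpc.py path, the 128 MiB/512 B malloc geometry, and the serial numbers are assumptions rather than values taken from the trace:

#!/usr/bin/env bash
# Approximate per-subsystem RPC sequence behind the Malloc1..Malloc10 lines above.
# The rpc.py location and the bdev sizes are assumptions; the transport options
# (-t tcp -o -u 8192) and the listener address/port are copied from the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 10); do
    $rpc bdev_malloc_create -b Malloc$i 128 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done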
00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.317 { 00:27:45.317 "params": { 00:27:45.317 "name": "Nvme$subsystem", 00:27:45.317 "trtype": "$TEST_TRANSPORT", 00:27:45.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.317 "adrfam": "ipv4", 00:27:45.317 "trsvcid": "$NVMF_PORT", 00:27:45.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.317 "hdgst": ${hdgst:-false}, 00:27:45.317 "ddgst": ${ddgst:-false} 00:27:45.317 }, 00:27:45.317 "method": "bdev_nvme_attach_controller" 00:27:45.317 } 00:27:45.317 EOF 00:27:45.317 )") 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.317 { 00:27:45.317 "params": { 00:27:45.317 "name": "Nvme$subsystem", 00:27:45.317 "trtype": "$TEST_TRANSPORT", 00:27:45.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.317 "adrfam": "ipv4", 00:27:45.317 "trsvcid": "$NVMF_PORT", 00:27:45.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.317 "hdgst": ${hdgst:-false}, 00:27:45.317 "ddgst": ${ddgst:-false} 00:27:45.317 }, 00:27:45.317 "method": "bdev_nvme_attach_controller" 00:27:45.317 } 00:27:45.317 EOF 00:27:45.317 )") 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.317 { 00:27:45.317 "params": { 00:27:45.317 "name": "Nvme$subsystem", 00:27:45.317 "trtype": "$TEST_TRANSPORT", 00:27:45.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.317 "adrfam": "ipv4", 00:27:45.317 "trsvcid": "$NVMF_PORT", 00:27:45.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.317 "hdgst": ${hdgst:-false}, 00:27:45.317 "ddgst": ${ddgst:-false} 00:27:45.317 }, 00:27:45.317 "method": "bdev_nvme_attach_controller" 00:27:45.317 } 00:27:45.317 EOF 00:27:45.317 )") 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.317 { 00:27:45.317 "params": { 00:27:45.317 "name": "Nvme$subsystem", 00:27:45.317 
"trtype": "$TEST_TRANSPORT", 00:27:45.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.317 "adrfam": "ipv4", 00:27:45.317 "trsvcid": "$NVMF_PORT", 00:27:45.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.317 "hdgst": ${hdgst:-false}, 00:27:45.317 "ddgst": ${ddgst:-false} 00:27:45.317 }, 00:27:45.317 "method": "bdev_nvme_attach_controller" 00:27:45.317 } 00:27:45.317 EOF 00:27:45.317 )") 00:27:45.317 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.318 { 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme$subsystem", 00:27:45.318 "trtype": "$TEST_TRANSPORT", 00:27:45.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "$NVMF_PORT", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.318 "hdgst": ${hdgst:-false}, 00:27:45.318 "ddgst": ${ddgst:-false} 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 } 00:27:45.318 EOF 00:27:45.318 )") 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.318 { 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme$subsystem", 00:27:45.318 "trtype": "$TEST_TRANSPORT", 00:27:45.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "$NVMF_PORT", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.318 "hdgst": ${hdgst:-false}, 00:27:45.318 "ddgst": ${ddgst:-false} 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 } 00:27:45.318 EOF 00:27:45.318 )") 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.318 { 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme$subsystem", 00:27:45.318 "trtype": "$TEST_TRANSPORT", 00:27:45.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "$NVMF_PORT", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.318 "hdgst": ${hdgst:-false}, 00:27:45.318 "ddgst": ${ddgst:-false} 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 } 00:27:45.318 EOF 00:27:45.318 )") 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.318 15:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.318 { 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme$subsystem", 00:27:45.318 "trtype": "$TEST_TRANSPORT", 00:27:45.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "$NVMF_PORT", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.318 "hdgst": ${hdgst:-false}, 00:27:45.318 "ddgst": ${ddgst:-false} 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 } 00:27:45.318 EOF 00:27:45.318 )") 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.318 { 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme$subsystem", 00:27:45.318 "trtype": "$TEST_TRANSPORT", 00:27:45.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "$NVMF_PORT", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.318 "hdgst": ${hdgst:-false}, 00:27:45.318 "ddgst": ${ddgst:-false} 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 } 00:27:45.318 EOF 00:27:45.318 )") 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.318 { 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme$subsystem", 00:27:45.318 "trtype": "$TEST_TRANSPORT", 00:27:45.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "$NVMF_PORT", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.318 "hdgst": ${hdgst:-false}, 00:27:45.318 "ddgst": ${ddgst:-false} 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 } 00:27:45.318 EOF 00:27:45.318 )") 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
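The block of config+=("$(cat <<-EOF ... EOF)") fragments above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem as a bash array of JSON snippets; the jq . here and the IFS=, join that follow turn the array into the comma-separated config list that bdevperf reads from /dev/fd/63. A cut-down, self-contained version of the same pattern (two controllers instead of ten, literal values in place of the test's variables, and a simplified outer wrapper) is:

#!/usr/bin/env bash
# Build an array of JSON objects with heredocs, join them with commas, validate with jq.
config=()
for i in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" joins the array on the first character of IFS, giving a comma-separated list.
IFS=,
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .

The heredoc-into-array trick keeps each JSON fragment readable while still letting a single "${config[*]}" expansion do the joining in one step.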
00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:27:45.318 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme1", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme2", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme3", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme4", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme5", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme6", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme7", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme8", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme9", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 },{ 00:27:45.318 "params": { 00:27:45.318 "name": "Nvme10", 00:27:45.318 "trtype": "tcp", 00:27:45.318 "traddr": "10.0.0.2", 00:27:45.318 "adrfam": "ipv4", 00:27:45.318 "trsvcid": "4420", 00:27:45.318 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:45.318 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:45.318 "hdgst": false, 00:27:45.318 "ddgst": false 00:27:45.318 }, 00:27:45.318 "method": "bdev_nvme_attach_controller" 00:27:45.318 }' 00:27:45.318 [2024-10-28 15:23:32.062416] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:45.318 [2024-10-28 15:23:32.062506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247211 ] 00:27:45.318 [2024-10-28 15:23:32.139226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.575 [2024-10-28 15:23:32.200240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.592 Running I/O for 10 seconds... 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:47.592 15:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:47.592 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:47.593 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.593 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.593 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.593 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=13 00:27:47.593 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 13 -ge 100 ']' 00:27:47.593 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=79 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 79 -ge 100 ']' 00:27:47.884 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=195 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3247098 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3247098 ']' 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3247098 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3247098 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:48.157 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:48.158 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3247098' 00:27:48.158 killing process with pid 3247098 00:27:48.158 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3247098 00:27:48.158 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3247098 00:27:48.158 [2024-10-28 15:23:34.883666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set 00:27:48.158 [2024-10-28 15:23:34.883853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is 
00:27:48.158 [2024-10-28 15:23:34.883666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2870 is same with the state(6) to be set
[... identical message repeated for tqpair=0x23d2870 through 15:23:34.884547 ...]
00:27:48.158 [2024-10-28 15:23:34.885967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab450 is same with the state(6) to be set
[... identical message repeated for tqpair=0x23ab450 through 15:23:34.886781 ...]
00:27:48.159 [2024-10-28 15:23:34.888083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2d40 is same with the state(6) to be set
[... identical message repeated for tqpair=0x23d2d40 through 15:23:34.888877 ...]
00:27:48.160 [2024-10-28 15:23:34.890333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d3210 is same with the state(6) to be set
[... identical message repeated for tqpair=0x23d3210 through 15:23:34.891150 ...]
00:27:48.160 [2024-10-28 15:23:34.892226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d3700 is same with the state(6) to be set
[... identical message repeated for tqpair=0x23d3700 through 15:23:34.893077 ...]
00:27:48.161 [2024-10-28 15:23:34.895372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4570 is same with the state(6) to be set
00:27:48.161 [2024-10-28 15:23:34.896080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21634a0 is same with the state(6) to be set
[... identical message repeated for tqpair=0x21634a0 through 15:23:34.896894 ...]
00:27:48.162 [2024-10-28 15:23:34.897609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set
[... identical message repeated for tqpair=0x2163970 through 15:23:34.898126; the burst continues below ...]
00:27:48.162 [2024-10-28
15:23:34.898139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.162 [2024-10-28 15:23:34.898151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.162 [2024-10-28 15:23:34.898163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.162 [2024-10-28 15:23:34.898175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same 
with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.898403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163970 is same with the state(6) to be set 00:27:48.163 [2024-10-28 15:23:34.904768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.904835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.904864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.904881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.904898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.904912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.904928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.904950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.904966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.904980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.904996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:48.163 [2024-10-28 15:23:34.905441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 
[2024-10-28 15:23:34.905764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.163 [2024-10-28 15:23:34.905823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.163 [2024-10-28 15:23:34.905838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.905852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.905868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.905881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.905897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.905910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.905926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.905944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.905960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.905974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.905994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.164 [2024-10-28 15:23:34.906816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.906871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.164 [2024-10-28 15:23:34.908130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720870 is same with the state(6) to be set 00:27:48.164 [2024-10-28 15:23:34.908309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.164 [2024-10-28 15:23:34.908426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688110 is same with the state(6) to be set 00:27:48.164 [2024-10-28 15:23:34.908470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.164 [2024-10-28 15:23:34.908490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72270 is same with the state(6) to be set 00:27:48.165 [2024-10-28 15:23:34.908638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:48.165 [2024-10-28 15:23:34.908708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72590 is same with the state(6) to be set 00:27:48.165 [2024-10-28 15:23:34.908809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.908911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.908924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94ed0 is same with the state(6) to be set 00:27:48.165 [2024-10-28 15:23:34.908983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45800 is same with the state(6) to be set 00:27:48.165 [2024-10-28 15:23:34.909168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716900 is same with the state(6) to be set 00:27:48.165 [2024-10-28 15:23:34.909343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717890 is same with the state(6) to be set 00:27:48.165 
[2024-10-28 15:23:34.909510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720cf0 is same with the state(6) to be set 00:27:48.165 [2024-10-28 15:23:34.909677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.165 [2024-10-28 15:23:34.909780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.909792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b990e0 is same with the state(6) to be set 00:27:48.165 [2024-10-28 15:23:34.910244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.165 [2024-10-28 15:23:34.910266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.910290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.165 [2024-10-28 15:23:34.910306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.910322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.165 [2024-10-28 15:23:34.910337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.910353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.165 [2024-10-28 15:23:34.910372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.910388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.165 [2024-10-28 15:23:34.910403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.910419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.165 [2024-10-28 15:23:34.910433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.910450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.165 [2024-10-28 15:23:34.910464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.165 [2024-10-28 15:23:34.910480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.910975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.910991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.166 [2024-10-28 15:23:34.911578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.166 [2024-10-28 15:23:34.911592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.911975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.911991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.912242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.912256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b24490 is same with the state(6) to be set 00:27:48.167 [2024-10-28 15:23:34.913508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 
[2024-10-28 15:23:34.913753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.913978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.913994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.914008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.914023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.914037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 
15:23:34.914052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.914066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.914082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.914096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.167 [2024-10-28 15:23:34.914112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.167 [2024-10-28 15:23:34.914125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 
15:23:34.914350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 
15:23:34.914646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 
15:23:34.914955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.914986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.914999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 
15:23:34.915259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.168 [2024-10-28 15:23:34.915377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.168 [2024-10-28 15:23:34.915391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.915406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.915420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.915436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.915450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.915875] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:27:48.169 [2024-10-28 15:23:34.915918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1688110 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.934942] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:27:48.169 [2024-10-28 15:23:34.935066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b45800 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.935132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720870 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.935168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72270 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.935202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72590 (9): Bad file descriptor 
00:27:48.169 [2024-10-28 15:23:34.935246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b94ed0 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.935275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1716900 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.935314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1717890 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.935343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720cf0 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.935372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b990e0 (9): Bad file descriptor 00:27:48.169 task offset: 29440 on job bdev=Nvme6n1 fails 00:27:48.169 1667.15 IOPS, 104.20 MiB/s [2024-10-28T14:23:35.036Z] [2024-10-28 15:23:34.936411] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:27:48.169 [2024-10-28 15:23:34.936582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.169 [2024-10-28 15:23:34.936617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1688110 with addr=10.0.0.2, port=4420 00:27:48.169 [2024-10-28 15:23:34.936637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688110 is same with the state(6) to be set 00:27:48.169 [2024-10-28 15:23:34.937844] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:48.169 [2024-10-28 15:23:34.937939] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:48.169 [2024-10-28 15:23:34.938026] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:48.169 [2024-10-28 15:23:34.938172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.169 [2024-10-28 15:23:34.938201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b45800 with addr=10.0.0.2, port=4420 00:27:48.169 [2024-10-28 15:23:34.938217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45800 is same with the state(6) to be set 00:27:48.169 [2024-10-28 15:23:34.938335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.169 [2024-10-28 15:23:34.938362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b94ed0 with addr=10.0.0.2, port=4420 00:27:48.169 [2024-10-28 15:23:34.938378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94ed0 is same with the state(6) to be set 00:27:48.169 [2024-10-28 15:23:34.938397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1688110 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.938491] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:48.169 [2024-10-28 15:23:34.938573] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:48.169 [2024-10-28 15:23:34.938678] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:48.169 [2024-10-28 15:23:34.938784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b45800 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.938827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1b94ed0 (9): Bad file descriptor 00:27:48.169 [2024-10-28 15:23:34.938845] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:27:48.169 [2024-10-28 15:23:34.938859] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:27:48.169 [2024-10-28 15:23:34.938875] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:27:48.169 [2024-10-28 15:23:34.938926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.938948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.938971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.938987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.169 [2024-10-28 15:23:34.939676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.169 [2024-10-28 15:23:34.939693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.939981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.939997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.170 [2024-10-28 15:23:34.940738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.170 [2024-10-28 15:23:34.940752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.171 [2024-10-28 15:23:34.940917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1924d60 is same with the state(6) to be set
00:27:48.172 [2024-10-28 15:23:34.943143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15c10 is same with the state(6) to be set
00:27:48.172 [2024-10-28 15:23:34.943291] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:27:48.172 [2024-10-28 15:23:34.943329] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:27:48.172 [2024-10-28 15:23:34.943345] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:27:48.172 [2024-10-28 15:23:34.943360] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:27:48.172 [2024-10-28 15:23:34.943399] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:27:48.172 [2024-10-28 15:23:34.943413] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:27:48.172 [2024-10-28 15:23:34.943426] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:27:48.172 [2024-10-28 15:23:34.945814] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:27:48.172 [2024-10-28 15:23:34.945842] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:27:48.172 [2024-10-28 15:23:34.945865] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:48.172 [2024-10-28 15:23:34.945888] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:27:48.172 [2024-10-28 15:23:34.946233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.172 [2024-10-28 15:23:34.946264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1720cf0 with addr=10.0.0.2, port=4420
00:27:48.172 [2024-10-28 15:23:34.946282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720cf0 is same with the state(6) to be set
00:27:48.172 [2024-10-28 15:23:34.946462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.172 [2024-10-28 15:23:34.946500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1720870 with addr=10.0.0.2, port=4420
00:27:48.172 [2024-10-28 15:23:34.946527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720870 is same with the state(6) to be set
00:27:48.174 [2024-10-28 15:23:34.949174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1926030 is same with the state(6) to be set
00:27:48.175 [2024-10-28 15:23:34.952233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 
15:23:34.952533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.175 [2024-10-28 15:23:34.952627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.175 [2024-10-28 15:23:34.952641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.952663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.952679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.952693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b22f60 is same with the state(6) to be set 00:27:48.176 [2024-10-28 15:23:34.953927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.953951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.953972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.953987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.954975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.954991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.955005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.955021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.955034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.955050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.955063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.955079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.955093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.955110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.955123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.955139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.176 [2024-10-28 15:23:34.955152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.176 [2024-10-28 15:23:34.955168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.955881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.955896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28420 is same with the state(6) to be set 00:27:48.177 [2024-10-28 15:23:34.957142] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.177 [2024-10-28 15:23:34.957665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.177 [2024-10-28 15:23:34.957682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.957980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.957993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:48.178 [2024-10-28 15:23:34.958393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 
15:23:34.958714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.178 [2024-10-28 15:23:34.958959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.178 [2024-10-28 15:23:34.958975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.958989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.959005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.959019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.959035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.959048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.959064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.959078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.959096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.959110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.959124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a9c40 is same with the state(6) to be set 00:27:48.179 [2024-10-28 15:23:34.960382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.960984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.960997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.179 [2024-10-28 15:23:34.961244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.179 [2024-10-28 15:23:34.961258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.961977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.961993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.180 [2024-10-28 15:23:34.962302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.180 [2024-10-28 15:23:34.962316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ab150 is same with the state(6) to be set 00:27:48.180 [2024-10-28 15:23:34.963517] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:27:48.180 [2024-10-28 15:23:34.963549] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:27:48.180 [2024-10-28 15:23:34.963569] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:27:48.180 [2024-10-28 15:23:34.963587] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:27:48.180 [2024-10-28 
15:23:34.963675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720cf0 (9): Bad file descriptor 00:27:48.180 [2024-10-28 15:23:34.963703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720870 (9): Bad file descriptor 00:27:48.180 [2024-10-28 15:23:34.963772] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:27:48.180 [2024-10-28 15:23:34.963805] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:27:48.180 [2024-10-28 15:23:34.963827] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:27:48.180 [2024-10-28 15:23:34.963847] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:27:48.180 [2024-10-28 15:23:34.963956] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:27:48.180 00:27:48.180 Latency(us) 00:27:48.180 [2024-10-28T14:23:35.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.180 Job: Nvme1n1 ended in about 1.03 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme1n1 : 1.03 186.35 11.65 62.12 0.00 254902.99 6844.87 257872.02 00:27:48.181 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme2n1 ended in about 1.04 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme2n1 : 1.04 123.56 7.72 61.78 0.00 335443.18 27767.85 274959.93 00:27:48.181 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme3n1 ended in about 1.03 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme3n1 : 1.03 186.15 11.63 62.05 0.00 245600.90 19418.07 270299.59 00:27:48.181 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme4n1 ended in about 1.04 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme4n1 : 1.04 187.60 11.72 61.57 0.00 239991.65 19903.53 260978.92 00:27:48.181 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme5n1 ended in about 1.01 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme5n1 : 1.01 189.88 11.87 63.29 0.00 230960.92 20486.07 268746.15 00:27:48.181 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme6n1 ended in about 1.00 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme6n1 : 1.00 192.15 12.01 64.05 0.00 223175.49 7912.87 262532.36 00:27:48.181 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme7n1 ended in about 1.02 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme7n1 : 1.02 189.08 11.82 63.03 0.00 222446.36 21068.61 253211.69 00:27:48.181 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme8n1 ended in about 1.04 seconds with error 00:27:48.181 Verification LBA 
range: start 0x0 length 0x400 00:27:48.181 Nvme8n1 : 1.04 122.76 7.67 61.38 0.00 299537.19 18350.08 276513.37 00:27:48.181 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme9n1 ended in about 1.05 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme9n1 : 1.05 122.39 7.65 61.19 0.00 294697.84 20971.52 270299.59 00:27:48.181 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:48.181 Job: Nvme10n1 ended in about 1.05 seconds with error 00:27:48.181 Verification LBA range: start 0x0 length 0x400 00:27:48.181 Nvme10n1 : 1.05 122.01 7.63 61.01 0.00 289964.94 19709.35 296708.17 00:27:48.181 [2024-10-28T14:23:35.048Z] =================================================================================================================== 00:27:48.181 [2024-10-28T14:23:35.048Z] Total : 1621.94 101.37 621.47 0.00 259065.24 6844.87 296708.17 00:27:48.181 [2024-10-28 15:23:34.996285] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:48.181 [2024-10-28 15:23:34.996373] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:27:48.181 [2024-10-28 15:23:34.996775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:34.996811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1688110 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:34.996832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688110 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:34.996956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:34.996983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1717890 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:34.996999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717890 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:34.997171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:34.997197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1716900 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:34.997219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716900 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:34.997416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:34.997442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b72270 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:34.997458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72270 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:34.997480] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:48.181 [2024-10-28 15:23:34.997494] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:48.181 [2024-10-28 15:23:34.997511] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:27:48.181 [2024-10-28 15:23:34.997537] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:48.181 [2024-10-28 15:23:34.997551] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:48.181 [2024-10-28 15:23:34.997564] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:48.181 [2024-10-28 15:23:34.999094] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:27:48.181 [2024-10-28 15:23:34.999124] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:27:48.181 [2024-10-28 15:23:34.999143] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:48.181 [2024-10-28 15:23:34.999159] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:27:48.181 [2024-10-28 15:23:34.999448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:34.999478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b72590 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:34.999494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72590 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:34.999644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:34.999676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b990e0 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:34.999692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b990e0 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:34.999721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1688110 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:34.999750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1717890 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:34.999778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1716900 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:34.999795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72270 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:34.999871] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:27:48.181 [2024-10-28 15:23:34.999907] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:27:48.181 [2024-10-28 15:23:34.999927] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:27:48.181 [2024-10-28 15:23:34.999947] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:27:48.181 [2024-10-28 15:23:35.000533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:35.000564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b94ed0 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:35.000580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94ed0 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:35.000709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.181 [2024-10-28 15:23:35.000748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b45800 with addr=10.0.0.2, port=4420 00:27:48.181 [2024-10-28 15:23:35.000764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45800 is same with the state(6) to be set 00:27:48.181 [2024-10-28 15:23:35.000782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72590 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:35.000801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b990e0 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:35.000822] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:27:48.181 [2024-10-28 15:23:35.000834] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:27:48.181 [2024-10-28 15:23:35.000847] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:27:48.181 [2024-10-28 15:23:35.000867] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:48.181 [2024-10-28 15:23:35.000881] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:48.181 [2024-10-28 15:23:35.000893] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:27:48.181 [2024-10-28 15:23:35.000909] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:48.181 [2024-10-28 15:23:35.000922] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:48.181 [2024-10-28 15:23:35.000935] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:27:48.181 [2024-10-28 15:23:35.000960] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:27:48.181 [2024-10-28 15:23:35.000973] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:27:48.181 [2024-10-28 15:23:35.000986] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:27:48.181 [2024-10-28 15:23:35.001079] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:27:48.181 [2024-10-28 15:23:35.001109] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:48.181 [2024-10-28 15:23:35.001127] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:27:48.181 [2024-10-28 15:23:35.001142] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:27:48.181 [2024-10-28 15:23:35.001155] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:48.181 [2024-10-28 15:23:35.001167] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:27:48.181 [2024-10-28 15:23:35.001197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b94ed0 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:35.001218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b45800 (9): Bad file descriptor 00:27:48.181 [2024-10-28 15:23:35.001234] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:27:48.181 [2024-10-28 15:23:35.001246] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:27:48.182 [2024-10-28 15:23:35.001258] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:27:48.182 [2024-10-28 15:23:35.001275] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:48.182 [2024-10-28 15:23:35.001288] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:48.182 [2024-10-28 15:23:35.001300] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:27:48.182 [2024-10-28 15:23:35.001337] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:27:48.182 [2024-10-28 15:23:35.001356] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:27:48.182 [2024-10-28 15:23:35.001521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.182 [2024-10-28 15:23:35.001547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1720870 with addr=10.0.0.2, port=4420 00:27:48.182 [2024-10-28 15:23:35.001563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720870 is same with the state(6) to be set 00:27:48.182 [2024-10-28 15:23:35.001730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.182 [2024-10-28 15:23:35.001756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1720cf0 with addr=10.0.0.2, port=4420 00:27:48.182 [2024-10-28 15:23:35.001771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720cf0 is same with the state(6) to be set 00:27:48.182 [2024-10-28 15:23:35.001785] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:27:48.182 [2024-10-28 15:23:35.001797] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:27:48.182 [2024-10-28 15:23:35.001810] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:27:48.182 [2024-10-28 15:23:35.001827] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:27:48.182 [2024-10-28 15:23:35.001840] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:27:48.182 [2024-10-28 15:23:35.001853] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:48.182 [2024-10-28 15:23:35.001896] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:27:48.182 [2024-10-28 15:23:35.001915] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:27:48.182 [2024-10-28 15:23:35.001938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720870 (9): Bad file descriptor 00:27:48.182 [2024-10-28 15:23:35.001958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720cf0 (9): Bad file descriptor 00:27:48.182 [2024-10-28 15:23:35.001998] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:48.182 [2024-10-28 15:23:35.002016] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:48.182 [2024-10-28 15:23:35.002029] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:48.182 [2024-10-28 15:23:35.002045] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:48.182 [2024-10-28 15:23:35.002059] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:48.182 [2024-10-28 15:23:35.002071] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:48.182 [2024-10-28 15:23:35.002107] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:27:48.182 [2024-10-28 15:23:35.002125] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:27:48.750 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3247211 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3247211 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3247211 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.686 rmmod nvme_tcp 00:27:49.686 
rmmod nvme_fabrics 00:27:49.686 rmmod nvme_keyring 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3247098 ']' 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3247098 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3247098 ']' 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3247098 00:27:49.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3247098) - No such process 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3247098 is not found' 00:27:49.686 Process with pid 3247098 is not found 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.686 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.947 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.947 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.947 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.947 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.947 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.861 00:27:51.861 real 0m7.879s 00:27:51.861 user 0m19.600s 00:27:51.861 sys 0m1.651s 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.861 ************************************ 00:27:51.861 END TEST nvmf_shutdown_tc3 00:27:51.861 ************************************ 00:27:51.861 15:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:51.861 ************************************ 00:27:51.861 START TEST nvmf_shutdown_tc4 00:27:51.861 ************************************ 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:51.861 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:51.861 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.861 15:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:51.861 Found net devices under 0000:84:00.0: cvl_0_0 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:51.861 Found net devices under 0000:84:00.1: cvl_0_1 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.861 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.862 15:23:38 
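The xtrace above is nvmf/common.sh picking the NICs for the run: PCI functions are matched purely by vendor:device ID (0x1592/0x159b are E810/ice, 0x37d2 is X722, the remaining IDs feed the Mellanox list), the two e810 matches at 0000:84:00.0/0000:84:00.1 are kept, and each function's kernel netdev name is then read out of sysfs, which is where cvl_0_0 and cvl_0_1 come from. In shell terms that lookup boils down to roughly the following sketch (standard lspci/sysfs usage with the PCI IDs and addresses from this run, not the SPDK script itself):

  # List the E810-XXV functions by PCI ID (8086:159b), then map each one to its netdev.
  for bdf in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
          [ -e "$dev" ] && echo "$bdf -> ${dev##*/}"    # e.g. 0000:84:00.0 -> cvl_0_0
      done
  done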
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.862 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:27:52.123 00:27:52.123 --- 10.0.0.2 ping statistics --- 00:27:52.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.123 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:52.123 00:27:52.123 --- 10.0.0.1 ping statistics --- 00:27:52.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.123 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3248124 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3248124 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3248124 ']' 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
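At this point nvmf_tcp_init has split the two ports across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the host namespace as 10.0.0.1 (the initiator side), TCP port 4420 is allowed through iptables (the ipts line followed by the full iptables line is common.sh's wrapper expanding and tagging the rule with an SPDK_NVMF comment), and reachability is checked with one ping in each direction before nvmf_tgt is launched inside the namespace with core mask 0x1E (binary 11110, which is why the reactors in the records that follow come up on cores 1 through 4). Condensed into plain commands, using the interface names and addresses echoed above (paths shortened, a sketch rather than the script's exact wording):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, host side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, namespace side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic on 4420
  ping -c 1 10.0.0.2                                        # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # namespace -> host
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                                                # 3248124 in this run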
00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.123 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 [2024-10-28 15:23:39.004518] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:27:52.383 [2024-10-28 15:23:39.004635] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.383 [2024-10-28 15:23:39.128783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.383 [2024-10-28 15:23:39.242888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.383 [2024-10-28 15:23:39.243014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.383 [2024-10-28 15:23:39.243052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.383 [2024-10-28 15:23:39.243083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.383 [2024-10-28 15:23:39.243109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.383 [2024-10-28 15:23:39.246703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.383 [2024-10-28 15:23:39.246811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.383 [2024-10-28 15:23:39.246863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:52.383 [2024-10-28 15:23:39.246867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 [2024-10-28 15:23:39.408882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:52.641 15:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.641 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:52.900 Malloc1 
00:27:52.900 [2024-10-28 15:23:39.527804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.900 Malloc2 00:27:52.900 Malloc3 00:27:52.900 Malloc4 00:27:52.900 Malloc5 00:27:52.900 Malloc6 00:27:53.158 Malloc7 00:27:53.158 Malloc8 00:27:53.158 Malloc9 00:27:53.158 Malloc10 00:27:53.158 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.158 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:53.158 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.158 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:53.158 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3248306 00:27:53.158 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:27:53.158 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:27:53.415 [2024-10-28 15:23:40.077225] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3248124 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3248124 ']' 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3248124 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3248124 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3248124' 00:27:58.685 killing process with pid 3248124 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3248124 00:27:58.685 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3248124 00:27:58.685 [2024-10-28 15:23:45.063186] 
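Between the transport creation above (nvmf_create_transport -t tcp -o -u 8192) and the perf run, shutdown.sh builds ten subsystems, one per Malloc bdev listed here, all exposed through the 10.0.0.2:4420 TCP listener. The ten "-- # cat" iterations append each subsystem's RPC calls to rpcs.txt, which is then replayed in one rpc_cmd batch (shutdown.sh@36), so only the resulting bdev names appear in the log. The sketch below is a rough, hedged equivalent of one iteration plus the tc4 sequence itself (start spdk_nvme_perf, give it five seconds, then kill the target so every queued write fails); the NQN pattern, Malloc names, transport options, perf arguments and PID come from the log, while the serial number and the Malloc size/block size are placeholders:

  # roughly what the rpcs.txt batch contains for each i in 1..10
  i=1
  scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512                          # size/block size are placeholders
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # serial is a placeholder
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

  # the tc4 shutdown itself: a long randwrite job, then the target is killed underneath it
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!              # 3248306 in this run
  sleep 5
  kill "$nvmfpid"         # the CQ transport errors that follow are the expected fallout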
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd8f0 is same with the state(6) to be set (repeated)
[2024-10-28 15:23:45.064011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191a650 is same with the state(6) to be set (repeated)
[2024-10-28 15:23:45.067717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ffaa0 is same with the state(6) to be set
[2024-10-28 15:23:45.067986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fff70 is same with the state(6) to be set (repeated)
[2024-10-28 15:23:45.068431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1900440 is same with the state(6) to be set (repeated)
Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated many times for the queued I/Os)
[2024-10-28 15:23:45.068764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ff5d0 is same with the state(6) to be set (repeated)
[2024-10-28 15:23:45.070318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated many times)
[2024-10-28 15:23:45.071656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fe760 is same with the state(6) to be set (repeated, interleaved with the failed completions)
[2024-10-28 15:23:45.072639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ff100 is same with the state(6) to be set (repeated, interleaved with the failed completions)
[2024-10-28 15:23:45.073415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
NVMe io qpair process completion error
[2024-10-28 15:23:45.080252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192dbe0 is same with the state(6) to be set (repeated)
Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated many times)
[2024-10-28 15:23:45.081527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated many times)
[2024-10-28 15:23:45.082584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-10-28 15:23:45.082788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdcb0 is same with the state(6) to be set (repeated, interleaved with the failed completions)
[2024-10-28 15:23:45.083940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-10-28 15:23:45.084057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd7e0 is same with the state(6) to be set (repeated, interleaved with the failed completions)
Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated many times)
[2024-10-28 15:23:45.085962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
NVMe io qpair process completion error
Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated many times)
[2024-10-28 15:23:45.087196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated many times)
00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 [2024-10-28 15:23:45.088407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O 
failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 Write completed with error (sct=0, sc=8) 00:27:58.689 starting I/O failed: -6 00:27:58.690 [2024-10-28 15:23:45.089736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 
Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write 
completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 [2024-10-28 15:23:45.092170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:58.690 NVMe io qpair process completion error 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 
00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 [2024-10-28 15:23:45.094603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.690 Write completed with error (sct=0, sc=8) 00:27:58.690 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed 
with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 [2024-10-28 15:23:45.096032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with 
error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error 
(sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 [2024-10-28 15:23:45.098858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:58.691 NVMe io qpair process completion error 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 starting I/O failed: -6 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.691 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with 
error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 [2024-10-28 15:23:45.100304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 
Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 [2024-10-28 15:23:45.101344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting 
I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 [2024-10-28 15:23:45.102676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.692 Write completed with error (sct=0, sc=8) 00:27:58.692 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write 
completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 [2024-10-28 15:23:45.106584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 
00:27:58.693 NVMe io qpair process completion error 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 [2024-10-28 15:23:45.108000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 Write completed with error (sct=0, sc=8) 00:27:58.693 starting I/O failed: -6 00:27:58.693 Write completed with error (sct=0, sc=8) 
00:27:58.693 Write completed with error (sct=0, sc=8)
00:27:58.693 starting I/O failed: -6
00:27:58.693 Write completed with error (sct=0, sc=8)
00:27:58.693 Write completed with error (sct=0, sc=8)
00:27:58.693 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat while the remaining queued writes are failed back ...]
00:27:58.693 [2024-10-28 15:23:45.109140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:58.693 Write completed with error (sct=0, sc=8)
00:27:58.693 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.694 [2024-10-28 15:23:45.110408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.694 Write completed with error (sct=0, sc=8)
00:27:58.694 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.694 [2024-10-28 15:23:45.113150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:58.694 NVMe io qpair process completion error
00:27:58.694 Write completed with error (sct=0, sc=8)
00:27:58.694 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.696 [2024-10-28 15:23:45.120016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:58.696 Write completed with error (sct=0, sc=8)
00:27:58.696 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.696 [2024-10-28 15:23:45.121177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:58.696 Write completed with error (sct=0, sc=8)
00:27:58.696 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.697 [2024-10-28 15:23:45.123086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.697 Write completed with error (sct=0, sc=8)
00:27:58.697 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.697 [2024-10-28 15:23:45.127615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:58.697 NVMe io qpair process completion error
00:27:58.697 Write completed with error (sct=0, sc=8)
00:27:58.697 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.697 [2024-10-28 15:23:45.129172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:58.698 Write completed with error (sct=0, sc=8)
00:27:58.698 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.698 [2024-10-28 15:23:45.130437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.698 Write completed with error (sct=0, sc=8)
00:27:58.698 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.698 [2024-10-28 15:23:45.131864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:58.698 Write completed with error (sct=0, sc=8)
00:27:58.699 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.699 [2024-10-28 15:23:45.135234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:58.699 NVMe io qpair process completion error
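The (sct=0, sc=8) status reported for each failed write corresponds, per the NVMe base specification, to Status Code Type 0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), and the -6 returned when new I/O is started is -ENXIO ("No such device or address"), the same error that spdk_nvme_qpair_process_completions reports above as a CQ transport error once the controller's connection goes away during the test. The standalone sketch below only decodes those two values the way the log prints them; the file name, helper names, and the abbreviated status tables are illustrative assumptions, not SPDK code and not part of this test.

/*
 * decode_nvme_status.c -- minimal, illustrative sketch: decodes the status
 * values printed in the log above.  The numeric codes follow the NVMe base
 * specification; the helper and file names are invented for this example.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Status Code Type (sct) field of an NVMe completion. */
static const char *sct_name(unsigned int sct)
{
    switch (sct) {
    case 0x0: return "GENERIC";              /* Generic Command Status */
    case 0x1: return "COMMAND_SPECIFIC";
    case 0x2: return "MEDIA_ERROR";
    case 0x3: return "PATH";                 /* Path Related Status */
    default:  return "VENDOR_OR_RESERVED";
    }
}

/* A few Generic Command Status codes (sc) relevant to this log. */
static const char *generic_sc_name(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x04: return "DATA_TRANSFER_ERROR";
    case 0x07: return "ABORTED_BY_REQUEST";
    case 0x08: return "ABORTED_SQ_DELETION"; /* the sc=8 seen above */
    default:   return "OTHER";
    }
}

int main(void)
{
    unsigned int sct = 0, sc = 8; /* from "Write completed with error (sct=0, sc=8)" */
    int io_rc = -ENXIO;           /* -6 on Linux, as in "starting I/O failed: -6" */

    printf("sct=%u (%s), sc=%u (%s)\n", sct, sct_name(sct), sc, generic_sc_name(sc));
    printf("I/O submit rc=%d (%s)\n", io_rc, strerror(-io_rc));
    return 0;
}

Built with a plain C compiler (for example cc decode_nvme_status.c && ./a.out), it prints sct=0 (GENERIC), sc=8 (ABORTED_SQ_DELETION) and rc=-6 (No such device or address): the queued writes were aborted because their submission queue was deleted while the qpair was torn down, which is consistent with the CQ transport errors logged by spdk_nvme_qpair_process_completions.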
00:27:58.699 Write completed with error (sct=0, sc=8)
00:27:58.699 starting I/O failed: -6
00:27:58.699 Write completed with error (sct=0, sc=8)
00:27:58.699 Write completed with error (sct=0, sc=8)
00:27:58.699 starting I/O failed: -6
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeat ...]
00:27:58.700 Write completed with error (sct=0, sc=8)
00:27:58.700 starting
I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Write completed with error (sct=0, sc=8) 00:27:58.700 starting I/O failed: -6 00:27:58.700 Initializing NVMe Controllers 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:27:58.700 Controller IO queue size 128, less than required. 
00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:27:58.700 Controller IO queue size 128, less than required. 00:27:58.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:27:58.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:27:58.700 Initialization complete. Launching workers. 
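Before the latency summary that follows, a note on the "Controller IO queue size 128, less than required" lines: the perf tool prints them when its requested queue depth exceeds the I/O queue size the controller granted, so surplus requests wait in the host driver instead of on the wire. A minimal, hypothetical sketch of pointing spdk_nvme_perf at one of the subsystems above is shown here; the queue depth, I/O size, workload, run time and core mask are illustrative assumptions rather than the exact arguments shutdown.sh passes.

  # Illustrative only: -r selects the NVMe-oF/TCP transport ID, -q the queue depth,
  # -o the I/O size in bytes, -w the workload, -t the run time in seconds, -c the core mask.
  ./build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -q 256 -o 4096 -w write -t 10 -c 0x1
  # A -q value above the granted queue size of 128 is what triggers the
  # "Consider using lower queue depth or smaller IO size" notice seen above.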
00:27:58.700 ========================================================
00:27:58.700                                                                            Latency(us)
00:27:58.700 Device Information                                                       :     IOPS   MiB/s    Average        min        max
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  1625.62   69.85   78755.69     924.59  152567.37
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  1698.11   72.97   75422.94    1187.39  136953.30
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  1695.12   72.84   74600.78     899.85  129460.11
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  1679.51   72.17   75311.34     850.54  130642.21
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1688.92   72.57   74924.93     689.42  131515.49
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  1688.92   72.57   74961.01     916.60  139028.00
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  1658.98   71.28   76365.77    1200.99  144232.24
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  1701.53   73.11   74478.99     875.63  130411.88
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  1676.94   72.06   75609.41     998.36  149857.48
00:27:58.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  1680.58   72.21   74537.90     950.27  130808.66
00:27:58.701 ========================================================
00:27:58.701 Total                                                                     : 16794.23  721.63   75482.41     689.42  152567.37
00:27:58.701
00:27:58.701 [2024-10-28 15:23:45.146351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1672c50 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16725f0 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16716b0 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1672920 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673900 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1671d10 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16722c0 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673720 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16719e0 is same with the state(6) to be set
00:27:58.701 [2024-10-28 15:23:45.146988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673ae0 is same with the state(6) to be set
00:27:58.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:58.961 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:27:59.901 15:23:46
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3248306 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3248306 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3248306 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:59.901 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:59.901 rmmod nvme_tcp 00:27:59.901 rmmod nvme_fabrics 00:27:59.901 rmmod nvme_keyring 00:28:00.160 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:00.160 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3248124 ']' 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3248124 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3248124 ']' 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3248124 00:28:00.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3248124) - No such process 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3248124 is not found' 00:28:00.161 Process with pid 3248124 is not found 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.161 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.068 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:02.068 00:28:02.068 real 0m10.171s 00:28:02.068 user 0m24.822s 00:28:02.068 sys 0m6.342s 00:28:02.068 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.068 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 ************************************ 00:28:02.068 END TEST nvmf_shutdown_tc4 00:28:02.068 ************************************ 00:28:02.068 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:02.068 00:28:02.068 real 0m41.390s 00:28:02.068 user 1m53.039s 00:28:02.068 sys 0m14.191s 00:28:02.068 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.068 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:28:02.068 ************************************ 00:28:02.068 END TEST nvmf_shutdown 00:28:02.068 ************************************ 00:28:02.068 15:23:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:02.327 00:28:02.327 real 15m32.815s 00:28:02.327 user 36m27.778s 00:28:02.327 sys 3m29.734s 00:28:02.327 15:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.327 15:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:02.327 ************************************ 00:28:02.327 END TEST nvmf_target_extra 00:28:02.327 ************************************ 00:28:02.327 15:23:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:02.327 15:23:48 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:02.327 15:23:48 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.327 15:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.327 ************************************ 00:28:02.327 START TEST nvmf_host 00:28:02.327 ************************************ 00:28:02.327 15:23:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:02.327 * Looking for test storage... 00:28:02.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:02.327 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:02.327 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # lcov --version 00:28:02.327 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:02.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.586 --rc genhtml_branch_coverage=1 00:28:02.586 --rc genhtml_function_coverage=1 00:28:02.586 --rc genhtml_legend=1 00:28:02.586 --rc geninfo_all_blocks=1 00:28:02.586 --rc geninfo_unexecuted_blocks=1 00:28:02.586 00:28:02.586 ' 00:28:02.586 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:02.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.587 --rc genhtml_branch_coverage=1 00:28:02.587 --rc genhtml_function_coverage=1 00:28:02.587 --rc genhtml_legend=1 00:28:02.587 --rc geninfo_all_blocks=1 00:28:02.587 --rc geninfo_unexecuted_blocks=1 00:28:02.587 00:28:02.587 ' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:02.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.587 --rc genhtml_branch_coverage=1 00:28:02.587 --rc genhtml_function_coverage=1 00:28:02.587 --rc genhtml_legend=1 00:28:02.587 --rc geninfo_all_blocks=1 00:28:02.587 --rc geninfo_unexecuted_blocks=1 00:28:02.587 00:28:02.587 ' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:02.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.587 --rc genhtml_branch_coverage=1 00:28:02.587 --rc genhtml_function_coverage=1 00:28:02.587 --rc genhtml_legend=1 00:28:02.587 --rc geninfo_all_blocks=1 00:28:02.587 --rc geninfo_unexecuted_blocks=1 00:28:02.587 00:28:02.587 ' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:02.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.587 ************************************ 00:28:02.587 START TEST nvmf_multicontroller 00:28:02.587 ************************************ 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:02.587 * Looking for test storage... 
00:28:02.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lcov --version 00:28:02.587 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:02.846 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:02.846 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.847 --rc genhtml_branch_coverage=1 00:28:02.847 --rc genhtml_function_coverage=1 00:28:02.847 --rc genhtml_legend=1 00:28:02.847 --rc geninfo_all_blocks=1 00:28:02.847 --rc geninfo_unexecuted_blocks=1 00:28:02.847 00:28:02.847 ' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.847 --rc genhtml_branch_coverage=1 00:28:02.847 --rc genhtml_function_coverage=1 00:28:02.847 --rc genhtml_legend=1 00:28:02.847 --rc geninfo_all_blocks=1 00:28:02.847 --rc geninfo_unexecuted_blocks=1 00:28:02.847 00:28:02.847 ' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.847 --rc genhtml_branch_coverage=1 00:28:02.847 --rc genhtml_function_coverage=1 00:28:02.847 --rc genhtml_legend=1 00:28:02.847 --rc geninfo_all_blocks=1 00:28:02.847 --rc geninfo_unexecuted_blocks=1 00:28:02.847 00:28:02.847 ' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.847 --rc genhtml_branch_coverage=1 00:28:02.847 --rc genhtml_function_coverage=1 00:28:02.847 --rc genhtml_legend=1 00:28:02.847 --rc geninfo_all_blocks=1 00:28:02.847 --rc geninfo_unexecuted_blocks=1 00:28:02.847 00:28:02.847 ' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:02.847 15:23:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:02.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:02.847 15:23:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.847 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.848 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.848 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:02.848 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:02.848 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:02.848 15:23:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.137 
15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.137 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:06.138 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:06.138 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.138 15:23:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:06.138 Found net devices under 0000:84:00.0: cvl_0_0 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:06.138 Found net devices under 0000:84:00.1: cvl_0_1 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
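The nvmf_tcp_init sequence traced next splits the two detected ports between target and initiator: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened with an SPDK_NVMF-tagged iptables rule (the rule the earlier teardown strips again via iptables-save | grep -v SPDK_NVMF | iptables-restore). Reduced to plain commands, with the interface names and addresses taken from this trace, it is roughly:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                 # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                                         # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF   # allow NVMe/TCP
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE further down), which is why the listeners created later live at 10.0.0.2 while the initiator-side tools connect from the default namespace.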
00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:06.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:28:06.138 00:28:06.138 --- 10.0.0.2 ping statistics --- 00:28:06.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.138 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:28:06.138 00:28:06.138 --- 10.0.0.1 ping statistics --- 00:28:06.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.138 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:06.138 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3251237 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3251237 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3251237 ']' 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.139 15:23:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.139 [2024-10-28 15:23:52.735899] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
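nvmf_tcp_init, recorded above, builds the back-to-back topology the rest of this test depends on: the target interface cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator interface cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP/4420 is opened with a tagged iptables rule, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, reusing the names and addresses from the trace (run as root; this is a reconstruction, not the suite's helper itself):

#!/usr/bin/env bash
# Rebuild the two-interface TCP test topology recorded in the trace (run as root).
set -euo pipefail
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0        # moved into the namespace, gets the target address
initiator_if=cvl_0_1     # stays in the root namespace, gets the initiator address
ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
# Open TCP/4420 toward the initiator interface; the SPDK_NVMF comment is what the
# suite's cleanup greps for when it strips these rules again.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'
ping -c 1 10.0.0.2                       # root namespace -> target
ip netns exec "$ns" ping -c 1 10.0.0.1   # target namespace -> initiator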
00:28:06.139 [2024-10-28 15:23:52.736013] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.139 [2024-10-28 15:23:52.881287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:06.139 [2024-10-28 15:23:52.999932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.139 [2024-10-28 15:23:53.000004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.139 [2024-10-28 15:23:53.000025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.139 [2024-10-28 15:23:53.000042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.139 [2024-10-28 15:23:53.000057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.397 [2024-10-28 15:23:53.003032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.397 [2024-10-28 15:23:53.003133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.397 [2024-10-28 15:23:53.003137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.397 [2024-10-28 15:23:53.247931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.397 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 Malloc0 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 [2024-10-28 15:23:53.318357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 [2024-10-28 15:23:53.326234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 Malloc1 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3251380 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3251380 /var/tmp/bdevperf.sock 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3251380 ']' 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:06.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
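At this point multicontroller.sh has two subsystems (cnode1 and cnode2, each with a Malloc namespace and listeners on 10.0.0.2 ports 4420 and 4421) and starts bdevperf idle on /var/tmp/bdevperf.sock so controllers can be attached over RPC before any I/O runs. A hedged sketch of that start-and-attach handshake, copying the command line and RPC arguments from the trace; SPDK_ROOT is a placeholder for an SPDK build tree, and the polling loop only approximates the suite's waitforlisten helper:

#!/usr/bin/env bash
# Start bdevperf idle on an RPC socket and attach the namespace target to it,
# following the command line and RPC arguments recorded in the trace.
set -euo pipefail
SPDK_ROOT=${SPDK_ROOT:?set to the root of an SPDK build tree}   # placeholder, not from the trace
sock=/var/tmp/bdevperf.sock
"$SPDK_ROOT/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w write -t 1 -f &
bdevperf_pid=$!
# Poll the RPC socket until bdevperf answers (rough stand-in for waitforlisten).
for _ in $(seq 1 100); do
    if "$SPDK_ROOT/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
# First controller: subsystem cnode1 on 10.0.0.2:4420, initiator address pinned to 10.0.0.1.
"$SPDK_ROOT/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
echo "bdevperf running as pid $bdevperf_pid"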
00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.656 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.221 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.221 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:07.221 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:07.221 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.221 15:23:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.221 NVMe0n1 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.221 1 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:07.221 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.222 request: 00:28:07.222 { 00:28:07.222 "name": "NVMe0", 00:28:07.222 "trtype": "tcp", 00:28:07.222 "traddr": "10.0.0.2", 00:28:07.222 "adrfam": "ipv4", 00:28:07.222 "trsvcid": "4420", 00:28:07.222 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:07.222 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:07.222 "hostaddr": "10.0.0.1", 00:28:07.222 "prchk_reftag": false, 00:28:07.222 "prchk_guard": false, 00:28:07.222 "hdgst": false, 00:28:07.222 "ddgst": false, 00:28:07.222 "allow_unrecognized_csi": false, 00:28:07.222 "method": "bdev_nvme_attach_controller", 00:28:07.222 "req_id": 1 00:28:07.222 } 00:28:07.222 Got JSON-RPC error response 00:28:07.222 response: 00:28:07.222 { 00:28:07.222 "code": -114, 00:28:07.222 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:07.222 } 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.222 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.481 request: 00:28:07.481 { 00:28:07.481 "name": "NVMe0", 00:28:07.481 "trtype": "tcp", 00:28:07.481 "traddr": "10.0.0.2", 00:28:07.481 "adrfam": "ipv4", 00:28:07.481 "trsvcid": "4420", 00:28:07.481 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:07.481 "hostaddr": "10.0.0.1", 00:28:07.481 "prchk_reftag": false, 00:28:07.481 "prchk_guard": false, 00:28:07.481 "hdgst": false, 00:28:07.481 "ddgst": false, 00:28:07.481 "allow_unrecognized_csi": false, 00:28:07.481 "method": "bdev_nvme_attach_controller", 00:28:07.481 "req_id": 1 00:28:07.481 } 00:28:07.481 Got JSON-RPC error response 00:28:07.481 response: 00:28:07.481 { 00:28:07.481 "code": -114, 00:28:07.481 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:07.481 } 00:28:07.481 15:23:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.481 request: 00:28:07.481 { 00:28:07.481 "name": "NVMe0", 00:28:07.481 "trtype": "tcp", 00:28:07.481 "traddr": "10.0.0.2", 00:28:07.481 "adrfam": "ipv4", 00:28:07.481 "trsvcid": "4420", 00:28:07.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:07.481 "hostaddr": "10.0.0.1", 00:28:07.481 "prchk_reftag": false, 00:28:07.481 "prchk_guard": false, 00:28:07.481 "hdgst": false, 00:28:07.481 "ddgst": false, 00:28:07.481 "multipath": "disable", 00:28:07.481 "allow_unrecognized_csi": false, 00:28:07.481 "method": "bdev_nvme_attach_controller", 00:28:07.481 "req_id": 1 00:28:07.481 } 00:28:07.481 Got JSON-RPC error response 00:28:07.481 response: 00:28:07.481 { 00:28:07.481 "code": -114, 00:28:07.481 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:07.481 } 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:07.481 15:23:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.481 request: 00:28:07.481 { 00:28:07.481 "name": "NVMe0", 00:28:07.481 "trtype": "tcp", 00:28:07.481 "traddr": "10.0.0.2", 00:28:07.481 "adrfam": "ipv4", 00:28:07.481 "trsvcid": "4420", 00:28:07.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:07.481 "hostaddr": "10.0.0.1", 00:28:07.481 "prchk_reftag": false, 00:28:07.481 "prchk_guard": false, 00:28:07.481 "hdgst": false, 00:28:07.481 "ddgst": false, 00:28:07.481 "multipath": "failover", 00:28:07.481 "allow_unrecognized_csi": false, 00:28:07.481 "method": "bdev_nvme_attach_controller", 00:28:07.481 "req_id": 1 00:28:07.481 } 00:28:07.481 Got JSON-RPC error response 00:28:07.481 response: 00:28:07.481 { 00:28:07.481 "code": -114, 00:28:07.481 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:07.481 } 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.481 NVMe0n1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
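The four NOT-wrapped attaches above confirm the guard rails: reusing the controller name NVMe0 with a different hostnqn, a different subsystem NQN, multipath disabled, or multipath failover on the already-claimed 10.0.0.2:4420 path each returns JSON-RPC error -114, while the plain attach to the second listener on 4421 goes through. One of those negative cases can be reproduced by hand along these lines (check_conflict is an illustrative helper, not something the suite defines; SPDK_ROOT and the bdevperf RPC socket are as in the previous sketch):

# Expect bdev_nvme_attach_controller to be rejected when the controller name NVMe0
# is reused against a different subsystem NQN (the -114 case shown in the trace).
check_conflict() {
    local out
    if out=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
            bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 2>&1); then
        echo "unexpected success: $out" >&2
        return 1
    fi
    grep -q 'already exists' <<< "$out"   # matches the error message in the trace
}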
00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.481 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:07.481 15:23:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:08.854 { 00:28:08.854 "results": [ 00:28:08.854 { 00:28:08.854 "job": "NVMe0n1", 00:28:08.854 "core_mask": "0x1", 00:28:08.854 "workload": "write", 00:28:08.854 "status": "finished", 00:28:08.854 "queue_depth": 128, 00:28:08.854 "io_size": 4096, 00:28:08.854 "runtime": 1.005775, 00:28:08.854 "iops": 18502.15008326912, 00:28:08.854 "mibps": 72.27402376277, 00:28:08.854 "io_failed": 0, 00:28:08.855 "io_timeout": 0, 00:28:08.855 "avg_latency_us": 6907.715852026997, 00:28:08.855 "min_latency_us": 4150.613333333334, 00:28:08.855 "max_latency_us": 12281.931851851852 00:28:08.855 } 00:28:08.855 ], 00:28:08.855 "core_count": 1 00:28:08.855 } 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3251380 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 3251380 ']' 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3251380 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3251380 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3251380' 00:28:08.855 killing process with pid 3251380 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3251380 00:28:08.855 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3251380 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1595 -- # read -r file 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1594 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1594 -- # sort -u 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # cat 00:28:09.113 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:09.113 [2024-10-28 15:23:53.440430] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:28:09.113 [2024-10-28 15:23:53.440537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251380 ] 00:28:09.113 [2024-10-28 15:23:53.522394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.113 [2024-10-28 15:23:53.589311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.113 [2024-10-28 15:23:54.270279] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name d4483c68-896e-49fe-ba27-c1b2c46c4d57 already exists 00:28:09.113 [2024-10-28 15:23:54.270320] bdev.c:7836:bdev_register: *ERROR*: Unable to add uuid:d4483c68-896e-49fe-ba27-c1b2c46c4d57 alias for bdev NVMe1n1 00:28:09.113 [2024-10-28 15:23:54.270336] bdev_nvme.c:4604:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:09.113 Running I/O for 1 seconds... 00:28:09.113 18481.00 IOPS, 72.19 MiB/s 00:28:09.113 Latency(us) 00:28:09.113 [2024-10-28T14:23:55.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.113 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:09.113 NVMe0n1 : 1.01 18502.15 72.27 0.00 0.00 6907.72 4150.61 12281.93 00:28:09.113 [2024-10-28T14:23:55.980Z] =================================================================================================================== 00:28:09.113 [2024-10-28T14:23:55.980Z] Total : 18502.15 72.27 0.00 0.00 6907.72 4150.61 12281.93 00:28:09.113 Received shutdown signal, test time was about 1.000000 seconds 00:28:09.113 00:28:09.113 Latency(us) 00:28:09.113 [2024-10-28T14:23:55.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.113 [2024-10-28T14:23:55.980Z] =================================================================================================================== 00:28:09.113 [2024-10-28T14:23:55.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.113 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1601 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1595 -- # read -r file 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.113 rmmod nvme_tcp 00:28:09.113 rmmod nvme_fabrics 00:28:09.113 rmmod nvme_keyring 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:28:09.113 
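The perform_tests run above reports the single write job both as JSON ("results"/"core_count") and as the table copied into try.txt (about 18502 IOPS and 6907.72 us average latency at queue depth 128). If that JSON is captured to a file, the per-job figures can be extracted with jq; the sketch below assumes a file name of perform_tests.json, which the suite itself does not write:

# Extract throughput and latency for each job from a saved perform_tests result.
# perform_tests.json is an assumed filename, not an artifact the test produces.
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency"' \
    perform_tests.json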
15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3251237 ']' 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3251237 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3251237 ']' 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3251237 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3251237 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3251237' 00:28:09.113 killing process with pid 3251237 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3251237 00:28:09.113 15:23:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3251237 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.683 15:23:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.592 15:23:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:11.592 00:28:11.592 real 0m8.972s 00:28:11.592 user 0m13.444s 00:28:11.592 sys 0m3.302s 00:28:11.592 15:23:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.592 15:23:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:11.592 ************************************ 00:28:11.592 END TEST nvmf_multicontroller 00:28:11.592 ************************************ 00:28:11.592 15:23:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:28:11.592 15:23:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:11.593 15:23:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.593 15:23:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.593 ************************************ 00:28:11.593 START TEST nvmf_aer 00:28:11.593 ************************************ 00:28:11.593 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:11.854 * Looking for test storage... 00:28:11.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lcov --version 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:11.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.854 --rc genhtml_branch_coverage=1 00:28:11.854 --rc genhtml_function_coverage=1 00:28:11.854 --rc genhtml_legend=1 00:28:11.854 --rc geninfo_all_blocks=1 00:28:11.854 --rc geninfo_unexecuted_blocks=1 00:28:11.854 00:28:11.854 ' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:11.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.854 --rc genhtml_branch_coverage=1 00:28:11.854 --rc genhtml_function_coverage=1 00:28:11.854 --rc genhtml_legend=1 00:28:11.854 --rc geninfo_all_blocks=1 00:28:11.854 --rc geninfo_unexecuted_blocks=1 00:28:11.854 00:28:11.854 ' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:11.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.854 --rc genhtml_branch_coverage=1 00:28:11.854 --rc genhtml_function_coverage=1 00:28:11.854 --rc genhtml_legend=1 00:28:11.854 --rc geninfo_all_blocks=1 00:28:11.854 --rc geninfo_unexecuted_blocks=1 00:28:11.854 00:28:11.854 ' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:11.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.854 --rc genhtml_branch_coverage=1 00:28:11.854 --rc genhtml_function_coverage=1 00:28:11.854 --rc genhtml_legend=1 00:28:11.854 --rc geninfo_all_blocks=1 00:28:11.854 --rc geninfo_unexecuted_blocks=1 00:28:11.854 00:28:11.854 ' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.854 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:11.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.855 15:23:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:15.146 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:15.146 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.146 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:15.147 Found net devices under 0000:84:00.0: cvl_0_0 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.147 15:24:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:15.147 Found net devices under 0000:84:00.1: cvl_0_1 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.147 
15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:28:15.147 00:28:15.147 --- 10.0.0.2 ping statistics --- 00:28:15.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.147 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:15.147 00:28:15.147 --- 10.0.0.1 ping statistics --- 00:28:15.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.147 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3253816 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3253816 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3253816 ']' 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.147 15:24:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.147 [2024-10-28 15:24:01.748787] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:28:15.147 [2024-10-28 15:24:01.748891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.147 [2024-10-28 15:24:01.937710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.407 [2024-10-28 15:24:02.072410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.407 [2024-10-28 15:24:02.072522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.407 [2024-10-28 15:24:02.072562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.407 [2024-10-28 15:24:02.072595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.407 [2024-10-28 15:24:02.072623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.407 [2024-10-28 15:24:02.076310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.407 [2024-10-28 15:24:02.076421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.407 [2024-10-28 15:24:02.076529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.407 [2024-10-28 15:24:02.076530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.407 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.407 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:28:15.407 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.407 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.407 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.666 [2024-10-28 15:24:02.288350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.666 Malloc0 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.666 [2024-10-28 15:24:02.361198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.666 [ 00:28:15.666 { 00:28:15.666 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:15.666 "subtype": "Discovery", 00:28:15.666 "listen_addresses": [], 00:28:15.666 "allow_any_host": true, 00:28:15.666 "hosts": [] 00:28:15.666 }, 00:28:15.666 { 00:28:15.666 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.666 "subtype": "NVMe", 00:28:15.666 "listen_addresses": [ 00:28:15.666 { 00:28:15.666 "trtype": "TCP", 00:28:15.666 "adrfam": "IPv4", 00:28:15.666 "traddr": "10.0.0.2", 00:28:15.666 "trsvcid": "4420" 00:28:15.666 } 00:28:15.666 ], 00:28:15.666 "allow_any_host": true, 00:28:15.666 "hosts": [], 00:28:15.666 "serial_number": "SPDK00000000000001", 00:28:15.666 "model_number": "SPDK bdev Controller", 00:28:15.666 "max_namespaces": 2, 00:28:15.666 "min_cntlid": 1, 00:28:15.666 "max_cntlid": 65519, 00:28:15.666 "namespaces": [ 00:28:15.666 { 00:28:15.666 "nsid": 1, 00:28:15.666 "bdev_name": "Malloc0", 00:28:15.666 "name": "Malloc0", 00:28:15.666 "nguid": "C97BDED1DFCD4D889DCA4B0BBB011B74", 00:28:15.666 "uuid": "c97bded1-dfcd-4d88-9dca-4b0bbb011b74" 00:28:15.666 } 00:28:15.666 ] 00:28:15.666 } 00:28:15.666 ] 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3253879 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:15.666 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.924 Malloc1 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.924 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.924 [ 00:28:15.924 { 00:28:15.924 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:15.924 "subtype": "Discovery", 00:28:15.924 "listen_addresses": [], 00:28:15.924 "allow_any_host": true, 00:28:15.924 "hosts": [] 00:28:15.924 }, 00:28:15.924 { 00:28:15.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.924 "subtype": "NVMe", 00:28:15.924 "listen_addresses": [ 00:28:15.924 { 00:28:15.924 "trtype": "TCP", 00:28:15.924 "adrfam": "IPv4", 00:28:15.924 "traddr": "10.0.0.2", 00:28:15.924 "trsvcid": "4420" 00:28:15.924 } 00:28:15.924 ], 00:28:15.924 "allow_any_host": true, 00:28:15.924 "hosts": [], 00:28:15.924 "serial_number": "SPDK00000000000001", 00:28:15.924 "model_number": "SPDK bdev Controller", 00:28:15.924 "max_namespaces": 2, 00:28:15.924 "min_cntlid": 1, 00:28:15.924 "max_cntlid": 65519, 00:28:15.924 "namespaces": [ 00:28:15.924 
{ 00:28:15.924 "nsid": 1, 00:28:15.925 "bdev_name": "Malloc0", 00:28:15.925 "name": "Malloc0", 00:28:15.925 "nguid": "C97BDED1DFCD4D889DCA4B0BBB011B74", 00:28:15.925 "uuid": "c97bded1-dfcd-4d88-9dca-4b0bbb011b74" 00:28:15.925 }, 00:28:15.925 { 00:28:15.925 "nsid": 2, 00:28:15.925 "bdev_name": "Malloc1", 00:28:15.925 "name": "Malloc1", 00:28:15.925 "nguid": "6D873F22065D461090D8598A7B8DD0BB", 00:28:15.925 "uuid": "6d873f22-065d-4610-90d8-598a7b8dd0bb" 00:28:15.925 } 00:28:15.925 ] 00:28:15.925 } 00:28:15.925 ] 00:28:15.925 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.925 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3253879 00:28:15.925 Asynchronous Event Request test 00:28:15.925 Attaching to 10.0.0.2 00:28:15.925 Attached to 10.0.0.2 00:28:15.925 Registering asynchronous event callbacks... 00:28:15.925 Starting namespace attribute notice tests for all controllers... 00:28:15.925 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:15.925 aer_cb - Changed Namespace 00:28:15.925 Cleaning up... 00:28:15.925 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:15.925 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.925 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.183 rmmod nvme_tcp 00:28:16.183 rmmod nvme_fabrics 00:28:16.183 rmmod nvme_keyring 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3253816 ']' 
00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3253816 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3253816 ']' 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3253816 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3253816 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3253816' 00:28:16.183 killing process with pid 3253816 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3253816 00:28:16.183 15:24:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3253816 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.752 15:24:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.663 00:28:18.663 real 0m6.974s 00:28:18.663 user 0m5.979s 00:28:18.663 sys 0m2.988s 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:18.663 ************************************ 00:28:18.663 END TEST nvmf_aer 00:28:18.663 ************************************ 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.663 ************************************ 00:28:18.663 START TEST nvmf_async_init 00:28:18.663 
************************************ 00:28:18.663 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:18.922 * Looking for test storage... 00:28:18.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lcov --version 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:18.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.922 --rc genhtml_branch_coverage=1 00:28:18.922 --rc genhtml_function_coverage=1 00:28:18.922 --rc genhtml_legend=1 00:28:18.922 --rc geninfo_all_blocks=1 00:28:18.922 --rc geninfo_unexecuted_blocks=1 00:28:18.922 00:28:18.922 ' 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:18.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.922 --rc genhtml_branch_coverage=1 00:28:18.922 --rc genhtml_function_coverage=1 00:28:18.922 --rc genhtml_legend=1 00:28:18.922 --rc geninfo_all_blocks=1 00:28:18.922 --rc geninfo_unexecuted_blocks=1 00:28:18.922 00:28:18.922 ' 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:18.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.922 --rc genhtml_branch_coverage=1 00:28:18.922 --rc genhtml_function_coverage=1 00:28:18.922 --rc genhtml_legend=1 00:28:18.922 --rc geninfo_all_blocks=1 00:28:18.922 --rc geninfo_unexecuted_blocks=1 00:28:18.922 00:28:18.922 ' 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:18.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.922 --rc genhtml_branch_coverage=1 00:28:18.922 --rc genhtml_function_coverage=1 00:28:18.922 --rc genhtml_legend=1 00:28:18.922 --rc geninfo_all_blocks=1 00:28:18.922 --rc geninfo_unexecuted_blocks=1 00:28:18.922 00:28:18.922 ' 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.922 15:24:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.922 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:18.923 15:24:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=32c89d4880514f46818bec77bbdc9105 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.923 15:24:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:22.213 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:22.213 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:22.213 Found net devices under 0000:84:00.0: cvl_0_0 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:22.213 Found net devices under 0000:84:00.1: cvl_0_1 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.213 15:24:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.213 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:28:22.214 00:28:22.214 --- 10.0.0.2 ping statistics --- 00:28:22.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.214 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:28:22.214 00:28:22.214 --- 10.0.0.1 ping statistics --- 00:28:22.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.214 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3256402 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3256402 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3256402 ']' 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.214 15:24:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.214 [2024-10-28 15:24:08.983279] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:28:22.214 [2024-10-28 15:24:08.983383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.474 [2024-10-28 15:24:09.104903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.474 [2024-10-28 15:24:09.219388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.474 [2024-10-28 15:24:09.219488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.474 [2024-10-28 15:24:09.219525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.474 [2024-10-28 15:24:09.219556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.474 [2024-10-28 15:24:09.219582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.474 [2024-10-28 15:24:09.220829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.735 [2024-10-28 15:24:09.525083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.735 null0 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 32c89d4880514f46818bec77bbdc9105 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.735 [2024-10-28 15:24:09.575708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.735 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.995 nvme0n1 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.995 [ 00:28:22.995 { 00:28:22.995 "name": "nvme0n1", 00:28:22.995 "aliases": [ 00:28:22.995 "32c89d48-8051-4f46-818b-ec77bbdc9105" 00:28:22.995 ], 00:28:22.995 "product_name": "NVMe disk", 00:28:22.995 "block_size": 512, 00:28:22.995 "num_blocks": 2097152, 00:28:22.995 "uuid": "32c89d48-8051-4f46-818b-ec77bbdc9105", 00:28:22.995 "numa_id": 1, 00:28:22.995 "assigned_rate_limits": { 00:28:22.995 "rw_ios_per_sec": 0, 00:28:22.995 "rw_mbytes_per_sec": 0, 00:28:22.995 "r_mbytes_per_sec": 0, 00:28:22.995 "w_mbytes_per_sec": 0 00:28:22.995 }, 00:28:22.995 "claimed": false, 00:28:22.995 "zoned": false, 00:28:22.995 "supported_io_types": { 00:28:22.995 "read": true, 00:28:22.995 "write": true, 00:28:22.995 "unmap": false, 00:28:22.995 "flush": true, 00:28:22.995 "reset": true, 00:28:22.995 "nvme_admin": true, 00:28:22.995 "nvme_io": true, 00:28:22.995 "nvme_io_md": false, 00:28:22.995 "write_zeroes": true, 00:28:22.995 "zcopy": false, 00:28:22.995 "get_zone_info": false, 00:28:22.995 "zone_management": false, 00:28:22.995 "zone_append": false, 00:28:22.995 "compare": true, 00:28:22.995 "compare_and_write": true, 00:28:22.995 "abort": true, 00:28:22.995 "seek_hole": false, 00:28:22.995 "seek_data": false, 00:28:22.995 "copy": true, 00:28:22.995 "nvme_iov_md": false 00:28:22.995 }, 00:28:22.995 
"memory_domains": [ 00:28:22.995 { 00:28:22.995 "dma_device_id": "system", 00:28:22.995 "dma_device_type": 1 00:28:22.995 } 00:28:22.995 ], 00:28:22.995 "driver_specific": { 00:28:22.995 "nvme": [ 00:28:22.995 { 00:28:22.995 "trid": { 00:28:22.995 "trtype": "TCP", 00:28:22.995 "adrfam": "IPv4", 00:28:22.995 "traddr": "10.0.0.2", 00:28:22.995 "trsvcid": "4420", 00:28:22.995 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:22.995 }, 00:28:22.995 "ctrlr_data": { 00:28:22.995 "cntlid": 1, 00:28:22.995 "vendor_id": "0x8086", 00:28:22.995 "model_number": "SPDK bdev Controller", 00:28:22.995 "serial_number": "00000000000000000000", 00:28:22.995 "firmware_revision": "25.01", 00:28:22.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:22.995 "oacs": { 00:28:22.995 "security": 0, 00:28:22.995 "format": 0, 00:28:22.995 "firmware": 0, 00:28:22.995 "ns_manage": 0 00:28:22.995 }, 00:28:22.995 "multi_ctrlr": true, 00:28:22.995 "ana_reporting": false 00:28:22.995 }, 00:28:22.995 "vs": { 00:28:22.995 "nvme_version": "1.3" 00:28:22.995 }, 00:28:22.995 "ns_data": { 00:28:22.995 "id": 1, 00:28:22.995 "can_share": true 00:28:22.995 } 00:28:22.995 } 00:28:22.995 ], 00:28:22.995 "mp_policy": "active_passive" 00:28:22.995 } 00:28:22.995 } 00:28:22.995 ] 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.995 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.995 [2024-10-28 15:24:09.848456] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.995 [2024-10-28 15:24:09.848688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16714a0 (9): Bad file descriptor 00:28:23.253 [2024-10-28 15:24:09.992029] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:28:23.253 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.253 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:23.253 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.253 15:24:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.253 [ 00:28:23.253 { 00:28:23.253 "name": "nvme0n1", 00:28:23.253 "aliases": [ 00:28:23.253 "32c89d48-8051-4f46-818b-ec77bbdc9105" 00:28:23.253 ], 00:28:23.253 "product_name": "NVMe disk", 00:28:23.253 "block_size": 512, 00:28:23.253 "num_blocks": 2097152, 00:28:23.253 "uuid": "32c89d48-8051-4f46-818b-ec77bbdc9105", 00:28:23.253 "numa_id": 1, 00:28:23.253 "assigned_rate_limits": { 00:28:23.253 "rw_ios_per_sec": 0, 00:28:23.253 "rw_mbytes_per_sec": 0, 00:28:23.253 "r_mbytes_per_sec": 0, 00:28:23.253 "w_mbytes_per_sec": 0 00:28:23.253 }, 00:28:23.253 "claimed": false, 00:28:23.253 "zoned": false, 00:28:23.253 "supported_io_types": { 00:28:23.253 "read": true, 00:28:23.253 "write": true, 00:28:23.253 "unmap": false, 00:28:23.253 "flush": true, 00:28:23.253 "reset": true, 00:28:23.253 "nvme_admin": true, 00:28:23.253 "nvme_io": true, 00:28:23.253 "nvme_io_md": false, 00:28:23.253 "write_zeroes": true, 00:28:23.253 "zcopy": false, 00:28:23.253 "get_zone_info": false, 00:28:23.253 "zone_management": false, 00:28:23.253 "zone_append": false, 00:28:23.253 "compare": true, 00:28:23.253 "compare_and_write": true, 00:28:23.253 "abort": true, 00:28:23.253 "seek_hole": false, 00:28:23.253 "seek_data": false, 00:28:23.253 "copy": true, 00:28:23.253 "nvme_iov_md": false 00:28:23.253 }, 00:28:23.253 "memory_domains": [ 00:28:23.253 { 00:28:23.253 "dma_device_id": "system", 00:28:23.253 "dma_device_type": 1 00:28:23.253 } 00:28:23.253 ], 00:28:23.253 "driver_specific": { 00:28:23.253 "nvme": [ 00:28:23.253 { 00:28:23.253 "trid": { 00:28:23.253 "trtype": "TCP", 00:28:23.253 "adrfam": "IPv4", 00:28:23.253 "traddr": "10.0.0.2", 00:28:23.253 "trsvcid": "4420", 00:28:23.253 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:23.253 }, 00:28:23.253 "ctrlr_data": { 00:28:23.253 "cntlid": 2, 00:28:23.253 "vendor_id": "0x8086", 00:28:23.253 "model_number": "SPDK bdev Controller", 00:28:23.253 "serial_number": "00000000000000000000", 00:28:23.253 "firmware_revision": "25.01", 00:28:23.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.253 "oacs": { 00:28:23.253 "security": 0, 00:28:23.253 "format": 0, 00:28:23.253 "firmware": 0, 00:28:23.253 "ns_manage": 0 00:28:23.253 }, 00:28:23.253 "multi_ctrlr": true, 00:28:23.253 "ana_reporting": false 00:28:23.253 }, 00:28:23.253 "vs": { 00:28:23.253 "nvme_version": "1.3" 00:28:23.253 }, 00:28:23.253 "ns_data": { 00:28:23.253 "id": 1, 00:28:23.253 "can_share": true 00:28:23.253 } 00:28:23.253 } 00:28:23.253 ], 00:28:23.253 "mp_policy": "active_passive" 00:28:23.253 } 00:28:23.253 } 00:28:23.253 ] 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
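The reset step just traced exercises the host-side reconnect path rather than any target state: bdev_nvme_reset_controller drops the existing admin connection (the transient "Bad file descriptor" flush error is logged while the qpair is being torn down), the bdev layer re-establishes the session, and a second bdev_get_bdevs shows the same namespace UUID but a controller ID that has advanced from 1 to 2, i.e. a fresh controller association on the same listener. A minimal sketch of that check, with the same rpc.py stand-in assumed:

  rpc.py bdev_nvme_reset_controller nvme0
  # once the reset completes, cntlid should have incremented while the uuid stays the same
  rpc.py bdev_get_bdevs -b nvme0n1 | grep -E '"uuid"|"cntlid"'
  rpc.py bdev_nvme_detach_controller nvme0    # drop the plain-TCP session before the TLS phase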
00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NJSZuBUMvN 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NJSZuBUMvN 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.NJSZuBUMvN 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.253 [2024-10-28 15:24:10.073486] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:23.253 [2024-10-28 15:24:10.073944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.253 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.253 [2024-10-28 15:24:10.097572] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:23.513 nvme0n1 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.513 [ 00:28:23.513 { 00:28:23.513 "name": "nvme0n1", 00:28:23.513 "aliases": [ 00:28:23.513 "32c89d48-8051-4f46-818b-ec77bbdc9105" 00:28:23.513 ], 00:28:23.513 "product_name": "NVMe disk", 00:28:23.513 "block_size": 512, 00:28:23.513 "num_blocks": 2097152, 00:28:23.513 "uuid": "32c89d48-8051-4f46-818b-ec77bbdc9105", 00:28:23.513 "numa_id": 1, 00:28:23.513 "assigned_rate_limits": { 00:28:23.513 "rw_ios_per_sec": 0, 00:28:23.513 "rw_mbytes_per_sec": 0, 00:28:23.513 "r_mbytes_per_sec": 0, 00:28:23.513 "w_mbytes_per_sec": 0 00:28:23.513 }, 00:28:23.513 "claimed": false, 00:28:23.513 "zoned": false, 00:28:23.513 "supported_io_types": { 00:28:23.513 "read": true, 00:28:23.513 "write": true, 00:28:23.513 "unmap": false, 00:28:23.513 "flush": true, 00:28:23.513 "reset": true, 00:28:23.513 "nvme_admin": true, 00:28:23.513 "nvme_io": true, 00:28:23.513 "nvme_io_md": false, 00:28:23.513 "write_zeroes": true, 00:28:23.513 "zcopy": false, 00:28:23.513 "get_zone_info": false, 00:28:23.513 "zone_management": false, 00:28:23.513 "zone_append": false, 00:28:23.513 "compare": true, 00:28:23.513 "compare_and_write": true, 00:28:23.513 "abort": true, 00:28:23.513 "seek_hole": false, 00:28:23.513 "seek_data": false, 00:28:23.513 "copy": true, 00:28:23.513 "nvme_iov_md": false 00:28:23.513 }, 00:28:23.513 "memory_domains": [ 00:28:23.513 { 00:28:23.513 "dma_device_id": "system", 00:28:23.513 "dma_device_type": 1 00:28:23.513 } 00:28:23.513 ], 00:28:23.513 "driver_specific": { 00:28:23.513 "nvme": [ 00:28:23.513 { 00:28:23.513 "trid": { 00:28:23.513 "trtype": "TCP", 00:28:23.513 "adrfam": "IPv4", 00:28:23.513 "traddr": "10.0.0.2", 00:28:23.513 "trsvcid": "4421", 00:28:23.513 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:23.513 }, 00:28:23.513 "ctrlr_data": { 00:28:23.513 "cntlid": 3, 00:28:23.513 "vendor_id": "0x8086", 00:28:23.513 "model_number": "SPDK bdev Controller", 00:28:23.513 "serial_number": "00000000000000000000", 00:28:23.513 "firmware_revision": "25.01", 00:28:23.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.513 "oacs": { 00:28:23.513 "security": 0, 00:28:23.513 "format": 0, 00:28:23.513 "firmware": 0, 00:28:23.513 "ns_manage": 0 00:28:23.513 }, 00:28:23.513 "multi_ctrlr": true, 00:28:23.513 "ana_reporting": false 00:28:23.513 }, 00:28:23.513 "vs": { 00:28:23.513 "nvme_version": "1.3" 00:28:23.513 }, 00:28:23.513 "ns_data": { 00:28:23.513 "id": 1, 00:28:23.513 "can_share": true 00:28:23.513 } 00:28:23.513 } 00:28:23.513 ], 00:28:23.513 "mp_policy": "active_passive" 00:28:23.513 } 00:28:23.513 } 00:28:23.513 ] 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.NJSZuBUMvN 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
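The second half of the test repeats the attach over a TLS-protected listener: a PSK in NVMeTLSkey-1:01:... interchange format is written to a temp file with 0600 permissions and registered as key0 via keyring_file_add_key, allow_any_host is disabled on the subsystem, a --secure-channel listener is added on port 4421, the host NQN is granted access with --psk key0, and bdev_nvme_attach_controller reconnects using the same key. The resulting bdev reports the same namespace UUID with cntlid 3 and trsvcid 4421, confirming the TLS association. A condensed sketch follows (rpc.py again assumed as the rpc_cmd equivalent; the key material is a placeholder, not the value from this run):

  KEY_PATH=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:<base64-psk>:' > "$KEY_PATH"       # placeholder PSK interchange string
  chmod 0600 "$KEY_PATH"
  rpc.py keyring_file_add_key key0 "$KEY_PATH"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0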
00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.513 rmmod nvme_tcp 00:28:23.513 rmmod nvme_fabrics 00:28:23.513 rmmod nvme_keyring 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3256402 ']' 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3256402 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3256402 ']' 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3256402 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3256402 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:23.513 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3256402' 00:28:23.513 killing process with pid 3256402 00:28:23.514 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3256402 00:28:23.514 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3256402 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.813 15:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.405 15:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.405 00:28:26.405 real 0m7.230s 00:28:26.405 user 0m3.070s 00:28:26.405 sys 0m2.962s 00:28:26.405 15:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.405 15:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.405 ************************************ 00:28:26.405 END TEST nvmf_async_init 00:28:26.405 ************************************ 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.406 ************************************ 00:28:26.406 START TEST dma 00:28:26.406 ************************************ 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:26.406 * Looking for test storage... 00:28:26.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lcov --version 00:28:26.406 15:24:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.406 --rc genhtml_branch_coverage=1 00:28:26.406 --rc genhtml_function_coverage=1 00:28:26.406 --rc genhtml_legend=1 00:28:26.406 --rc geninfo_all_blocks=1 00:28:26.406 --rc geninfo_unexecuted_blocks=1 00:28:26.406 00:28:26.406 ' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.406 --rc genhtml_branch_coverage=1 00:28:26.406 --rc genhtml_function_coverage=1 00:28:26.406 --rc genhtml_legend=1 00:28:26.406 --rc geninfo_all_blocks=1 00:28:26.406 --rc geninfo_unexecuted_blocks=1 00:28:26.406 00:28:26.406 ' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.406 --rc genhtml_branch_coverage=1 00:28:26.406 --rc genhtml_function_coverage=1 00:28:26.406 --rc genhtml_legend=1 00:28:26.406 --rc geninfo_all_blocks=1 00:28:26.406 --rc geninfo_unexecuted_blocks=1 00:28:26.406 00:28:26.406 ' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:26.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.406 --rc genhtml_branch_coverage=1 00:28:26.406 --rc genhtml_function_coverage=1 00:28:26.406 --rc genhtml_legend=1 00:28:26.406 --rc geninfo_all_blocks=1 00:28:26.406 --rc geninfo_unexecuted_blocks=1 00:28:26.406 00:28:26.406 ' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.406 
15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:26.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:26.406 00:28:26.406 real 0m0.320s 00:28:26.406 user 0m0.242s 00:28:26.406 sys 0m0.089s 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:26.406 ************************************ 00:28:26.406 END TEST dma 00:28:26.406 ************************************ 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:26.406 15:24:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.407 15:24:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.407 ************************************ 00:28:26.407 START TEST nvmf_identify 00:28:26.407 
************************************ 00:28:26.407 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:26.407 * Looking for test storage... 00:28:26.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.407 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:26.407 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lcov --version 00:28:26.407 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:26.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.666 --rc genhtml_branch_coverage=1 00:28:26.666 --rc genhtml_function_coverage=1 00:28:26.666 --rc genhtml_legend=1 00:28:26.666 --rc geninfo_all_blocks=1 00:28:26.666 --rc geninfo_unexecuted_blocks=1 00:28:26.666 00:28:26.666 ' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:26.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.666 --rc genhtml_branch_coverage=1 00:28:26.666 --rc genhtml_function_coverage=1 00:28:26.666 --rc genhtml_legend=1 00:28:26.666 --rc geninfo_all_blocks=1 00:28:26.666 --rc geninfo_unexecuted_blocks=1 00:28:26.666 00:28:26.666 ' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:26.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.666 --rc genhtml_branch_coverage=1 00:28:26.666 --rc genhtml_function_coverage=1 00:28:26.666 --rc genhtml_legend=1 00:28:26.666 --rc geninfo_all_blocks=1 00:28:26.666 --rc geninfo_unexecuted_blocks=1 00:28:26.666 00:28:26.666 ' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:26.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.666 --rc genhtml_branch_coverage=1 00:28:26.666 --rc genhtml_function_coverage=1 00:28:26.666 --rc genhtml_legend=1 00:28:26.666 --rc geninfo_all_blocks=1 00:28:26.666 --rc geninfo_unexecuted_blocks=1 00:28:26.666 00:28:26.666 ' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:26.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.666 15:24:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.961 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:29.962 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:29.962 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:29.962 Found net devices under 0000:84:00.0: cvl_0_0 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:29.962 Found net devices under 0000:84:00.1: cvl_0_1 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:29.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:28:29.962 00:28:29.962 --- 10.0.0.2 ping statistics --- 00:28:29.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.962 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:29.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:28:29.962 00:28:29.962 --- 10.0.0.1 ping statistics --- 00:28:29.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.962 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3258881 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3258881 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3258881 ']' 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:29.962 15:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:29.962 [2024-10-28 15:24:16.612498] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
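The nvmf_tcp_init portion of the trace builds the test topology before the target starts: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and the two pings confirm reachability in both directions. A condensed sketch of that sequence, assuming (as on this rig) the two ports are cabled back-to-back; interface and namespace names are the ones from the trace:

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0        # moved into the namespace, gets the target IP
  INI_IF=cvl_0_1        # stays in the root namespace, gets the initiator IP
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP traffic on the port the listeners will use later.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  # Sanity checks matching the pings in the log.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1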
00:28:29.962 [2024-10-28 15:24:16.612611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.962 [2024-10-28 15:24:16.746562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:30.222 [2024-10-28 15:24:16.869992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.222 [2024-10-28 15:24:16.870098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.222 [2024-10-28 15:24:16.870137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.222 [2024-10-28 15:24:16.870167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.222 [2024-10-28 15:24:16.870193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.222 [2024-10-28 15:24:16.873726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.222 [2024-10-28 15:24:16.873793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.222 [2024-10-28 15:24:16.873880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.222 [2024-10-28 15:24:16.873883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.222 [2024-10-28 15:24:17.008054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.222 Malloc0 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.222 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.484 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.484 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.484 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.484 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.484 [2024-10-28 15:24:17.094711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.485 [ 00:28:30.485 { 00:28:30.485 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:30.485 "subtype": "Discovery", 00:28:30.485 "listen_addresses": [ 00:28:30.485 { 00:28:30.485 "trtype": "TCP", 00:28:30.485 "adrfam": "IPv4", 00:28:30.485 "traddr": "10.0.0.2", 00:28:30.485 "trsvcid": "4420" 00:28:30.485 } 00:28:30.485 ], 00:28:30.485 "allow_any_host": true, 00:28:30.485 "hosts": [] 00:28:30.485 }, 00:28:30.485 { 00:28:30.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.485 "subtype": "NVMe", 00:28:30.485 "listen_addresses": [ 00:28:30.485 { 00:28:30.485 "trtype": "TCP", 00:28:30.485 "adrfam": "IPv4", 00:28:30.485 "traddr": "10.0.0.2", 00:28:30.485 "trsvcid": "4420" 00:28:30.485 } 00:28:30.485 ], 00:28:30.485 "allow_any_host": true, 00:28:30.485 "hosts": [], 00:28:30.485 "serial_number": "SPDK00000000000001", 00:28:30.485 "model_number": "SPDK bdev Controller", 00:28:30.485 "max_namespaces": 32, 00:28:30.485 "min_cntlid": 1, 00:28:30.485 "max_cntlid": 65519, 00:28:30.485 "namespaces": [ 00:28:30.485 { 00:28:30.485 "nsid": 1, 00:28:30.485 "bdev_name": "Malloc0", 00:28:30.485 "name": "Malloc0", 00:28:30.485 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:30.485 "eui64": "ABCDEF0123456789", 00:28:30.485 "uuid": "22f85353-4f5c-4299-9069-c5bf58cbf97c" 00:28:30.485 } 00:28:30.485 ] 00:28:30.485 } 00:28:30.485 ] 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.485 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:30.485 [2024-10-28 15:24:17.135810] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:28:30.485 [2024-10-28 15:24:17.135859] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259033 ] 00:28:30.485 [2024-10-28 15:24:17.194157] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:28:30.485 [2024-10-28 15:24:17.194237] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:30.485 [2024-10-28 15:24:17.194249] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:30.485 [2024-10-28 15:24:17.194266] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:30.485 [2024-10-28 15:24:17.194282] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:30.485 [2024-10-28 15:24:17.195043] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:28:30.485 [2024-10-28 15:24:17.195102] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10f2690 0 00:28:30.485 [2024-10-28 15:24:17.204661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:30.485 [2024-10-28 15:24:17.204687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:30.485 [2024-10-28 15:24:17.204698] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:30.485 [2024-10-28 15:24:17.204705] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:30.485 [2024-10-28 15:24:17.204751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.204766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.204774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.485 [2024-10-28 15:24:17.204797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:30.485 [2024-10-28 15:24:17.204828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.485 [2024-10-28 15:24:17.211663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.485 [2024-10-28 15:24:17.211684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.485 [2024-10-28 15:24:17.211693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.211702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.485 [2024-10-28 15:24:17.211727] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:30.485 [2024-10-28 15:24:17.211742] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:28:30.485 [2024-10-28 15:24:17.211753] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:28:30.485 [2024-10-28 15:24:17.211783] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.211794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.211801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.485 [2024-10-28 15:24:17.211814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.485 [2024-10-28 15:24:17.211841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.485 [2024-10-28 15:24:17.212000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.485 [2024-10-28 15:24:17.212016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.485 [2024-10-28 15:24:17.212024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.485 [2024-10-28 15:24:17.212044] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:28:30.485 [2024-10-28 15:24:17.212059] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:28:30.485 [2024-10-28 15:24:17.212072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.485 [2024-10-28 15:24:17.212099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.485 [2024-10-28 15:24:17.212124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.485 [2024-10-28 15:24:17.212258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.485 [2024-10-28 15:24:17.212271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.485 [2024-10-28 15:24:17.212279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.485 [2024-10-28 15:24:17.212297] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:28:30.485 [2024-10-28 15:24:17.212312] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:30.485 [2024-10-28 15:24:17.212325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.485 [2024-10-28 15:24:17.212352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.485 [2024-10-28 15:24:17.212376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 
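The rpc_cmd calls traced in host/identify.sh above configure the freshly started nvmf_tgt: create the TCP transport, back a namespace with a malloc bdev, create subsystem cnode1, attach the namespace, and add listeners for both the subsystem and the discovery service, which is what nvmf_get_subsystems then reports. Roughly the same sequence can be issued by hand with scripts/rpc.py (the rpc.py path below is an assumption based on this workspace layout; the flags are copied verbatim from the trace, and rpc.py's default /var/tmp/spdk.sock socket matches the one the target is waiting on):

  #!/usr/bin/env bash
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  $RPC nvmf_create_transport -t tcp -o -u 8192          # -u 8192: I/O unit size
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems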
00:28:30.485 [2024-10-28 15:24:17.212457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.485 [2024-10-28 15:24:17.212470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.485 [2024-10-28 15:24:17.212478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.485 [2024-10-28 15:24:17.212496] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:30.485 [2024-10-28 15:24:17.212520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.485 [2024-10-28 15:24:17.212554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.485 [2024-10-28 15:24:17.212579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.485 [2024-10-28 15:24:17.212676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.485 [2024-10-28 15:24:17.212693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.485 [2024-10-28 15:24:17.212700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.485 [2024-10-28 15:24:17.212718] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:30.485 [2024-10-28 15:24:17.212728] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:30.485 [2024-10-28 15:24:17.212742] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:30.485 [2024-10-28 15:24:17.212854] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:28:30.485 [2024-10-28 15:24:17.212863] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:30.485 [2024-10-28 15:24:17.212881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.485 [2024-10-28 15:24:17.212897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.485 [2024-10-28 15:24:17.212908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.486 [2024-10-28 15:24:17.212933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.486 [2024-10-28 15:24:17.213072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.486 [2024-10-28 15:24:17.213085] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.486 [2024-10-28 15:24:17.213093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.486 [2024-10-28 15:24:17.213111] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:30.486 [2024-10-28 15:24:17.213129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.213157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.486 [2024-10-28 15:24:17.213180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.486 [2024-10-28 15:24:17.213267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.486 [2024-10-28 15:24:17.213282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.486 [2024-10-28 15:24:17.213290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.486 [2024-10-28 15:24:17.213307] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:30.486 [2024-10-28 15:24:17.213316] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:30.486 [2024-10-28 15:24:17.213335] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:28:30.486 [2024-10-28 15:24:17.213352] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:30.486 [2024-10-28 15:24:17.213371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.213392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.486 [2024-10-28 15:24:17.213416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.486 [2024-10-28 15:24:17.213582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.486 [2024-10-28 15:24:17.213596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.486 [2024-10-28 15:24:17.213603] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f2690): datao=0, datal=4096, cccid=0 00:28:30.486 [2024-10-28 15:24:17.213620] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1154100) on tqpair(0x10f2690): expected_datao=0, payload_size=4096 00:28:30.486 [2024-10-28 15:24:17.213629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213648] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213676] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.486 [2024-10-28 15:24:17.213702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.486 [2024-10-28 15:24:17.213710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.486 [2024-10-28 15:24:17.213733] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:28:30.486 [2024-10-28 15:24:17.213743] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:28:30.486 [2024-10-28 15:24:17.213751] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:28:30.486 [2024-10-28 15:24:17.213762] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:28:30.486 [2024-10-28 15:24:17.213770] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:28:30.486 [2024-10-28 15:24:17.213779] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:28:30.486 [2024-10-28 15:24:17.213796] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:30.486 [2024-10-28 15:24:17.213810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.213825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.213838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:30.486 [2024-10-28 15:24:17.213862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.486 [2024-10-28 15:24:17.214001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.486 [2024-10-28 15:24:17.214016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.486 [2024-10-28 15:24:17.214031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.486 [2024-10-28 15:24:17.214058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10f2690) 00:28:30.486 
[2024-10-28 15:24:17.214086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.486 [2024-10-28 15:24:17.214097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.214121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.486 [2024-10-28 15:24:17.214132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.214156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.486 [2024-10-28 15:24:17.214166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.214190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.486 [2024-10-28 15:24:17.214200] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:30.486 [2024-10-28 15:24:17.214217] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:30.486 [2024-10-28 15:24:17.214230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.214249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.486 [2024-10-28 15:24:17.214274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154100, cid 0, qid 0 00:28:30.486 [2024-10-28 15:24:17.214286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154280, cid 1, qid 0 00:28:30.486 [2024-10-28 15:24:17.214295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154400, cid 2, qid 0 00:28:30.486 [2024-10-28 15:24:17.214304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.486 [2024-10-28 15:24:17.214312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154700, cid 4, qid 0 00:28:30.486 [2024-10-28 15:24:17.214462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.486 [2024-10-28 15:24:17.214476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.486 [2024-10-28 15:24:17.214484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:30.486 [2024-10-28 15:24:17.214491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154700) on tqpair=0x10f2690 00:28:30.486 [2024-10-28 15:24:17.214507] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:28:30.486 [2024-10-28 15:24:17.214522] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:28:30.486 [2024-10-28 15:24:17.214543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f2690) 00:28:30.486 [2024-10-28 15:24:17.214565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.486 [2024-10-28 15:24:17.214588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154700, cid 4, qid 0 00:28:30.486 [2024-10-28 15:24:17.214757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.486 [2024-10-28 15:24:17.214774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.486 [2024-10-28 15:24:17.214782] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.486 [2024-10-28 15:24:17.214789] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f2690): datao=0, datal=4096, cccid=4 00:28:30.487 [2024-10-28 15:24:17.214797] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1154700) on tqpair(0x10f2690): expected_datao=0, payload_size=4096 00:28:30.487 [2024-10-28 15:24:17.214806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.214825] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.214835] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.258664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.487 [2024-10-28 15:24:17.258684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.487 [2024-10-28 15:24:17.258692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.258700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154700) on tqpair=0x10f2690 00:28:30.487 [2024-10-28 15:24:17.258722] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:28:30.487 [2024-10-28 15:24:17.258767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.258779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f2690) 00:28:30.487 [2024-10-28 15:24:17.258792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.487 [2024-10-28 15:24:17.258805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.258813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.258820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10f2690) 00:28:30.487 [2024-10-28 15:24:17.258830] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.487 [2024-10-28 15:24:17.258863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154700, cid 4, qid 0 00:28:30.487 [2024-10-28 15:24:17.258876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154880, cid 5, qid 0 00:28:30.487 [2024-10-28 15:24:17.259048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.487 [2024-10-28 15:24:17.259065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.487 [2024-10-28 15:24:17.259073] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.259080] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f2690): datao=0, datal=1024, cccid=4 00:28:30.487 [2024-10-28 15:24:17.259088] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1154700) on tqpair(0x10f2690): expected_datao=0, payload_size=1024 00:28:30.487 [2024-10-28 15:24:17.259097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.259108] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.259116] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.259130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.487 [2024-10-28 15:24:17.259140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.487 [2024-10-28 15:24:17.259148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.259155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154880) on tqpair=0x10f2690 00:28:30.487 [2024-10-28 15:24:17.299763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.487 [2024-10-28 15:24:17.299783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.487 [2024-10-28 15:24:17.299791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.299799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154700) on tqpair=0x10f2690 00:28:30.487 [2024-10-28 15:24:17.299819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.299828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f2690) 00:28:30.487 [2024-10-28 15:24:17.299841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.487 [2024-10-28 15:24:17.299874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154700, cid 4, qid 0 00:28:30.487 [2024-10-28 15:24:17.299984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.487 [2024-10-28 15:24:17.300000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.487 [2024-10-28 15:24:17.300008] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300015] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f2690): datao=0, datal=3072, cccid=4 00:28:30.487 [2024-10-28 15:24:17.300023] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1154700) on tqpair(0x10f2690): expected_datao=0, payload_size=3072 00:28:30.487 [2024-10-28 15:24:17.300032] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300052] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300062] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.487 [2024-10-28 15:24:17.300099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.487 [2024-10-28 15:24:17.300107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154700) on tqpair=0x10f2690 00:28:30.487 [2024-10-28 15:24:17.300131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10f2690) 00:28:30.487 [2024-10-28 15:24:17.300153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.487 [2024-10-28 15:24:17.300184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154700, cid 4, qid 0 00:28:30.487 [2024-10-28 15:24:17.300291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.487 [2024-10-28 15:24:17.300304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.487 [2024-10-28 15:24:17.300312] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300319] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10f2690): datao=0, datal=8, cccid=4 00:28:30.487 [2024-10-28 15:24:17.300327] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1154700) on tqpair(0x10f2690): expected_datao=0, payload_size=8 00:28:30.487 [2024-10-28 15:24:17.300336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300347] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.300355] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.343673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.487 [2024-10-28 15:24:17.343705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.487 [2024-10-28 15:24:17.343719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.487 [2024-10-28 15:24:17.343727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154700) on tqpair=0x10f2690 00:28:30.487 ===================================================== 00:28:30.487 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:30.487 ===================================================== 00:28:30.487 Controller Capabilities/Features 00:28:30.487 ================================ 00:28:30.487 Vendor ID: 0000 00:28:30.487 Subsystem Vendor ID: 0000 00:28:30.487 Serial Number: .................... 00:28:30.487 Model Number: ........................................ 
00:28:30.487 Firmware Version: 25.01 00:28:30.487 Recommended Arb Burst: 0 00:28:30.487 IEEE OUI Identifier: 00 00 00 00:28:30.487 Multi-path I/O 00:28:30.487 May have multiple subsystem ports: No 00:28:30.487 May have multiple controllers: No 00:28:30.487 Associated with SR-IOV VF: No 00:28:30.487 Max Data Transfer Size: 131072 00:28:30.487 Max Number of Namespaces: 0 00:28:30.487 Max Number of I/O Queues: 1024 00:28:30.487 NVMe Specification Version (VS): 1.3 00:28:30.487 NVMe Specification Version (Identify): 1.3 00:28:30.487 Maximum Queue Entries: 128 00:28:30.487 Contiguous Queues Required: Yes 00:28:30.487 Arbitration Mechanisms Supported 00:28:30.487 Weighted Round Robin: Not Supported 00:28:30.487 Vendor Specific: Not Supported 00:28:30.487 Reset Timeout: 15000 ms 00:28:30.487 Doorbell Stride: 4 bytes 00:28:30.487 NVM Subsystem Reset: Not Supported 00:28:30.487 Command Sets Supported 00:28:30.487 NVM Command Set: Supported 00:28:30.487 Boot Partition: Not Supported 00:28:30.487 Memory Page Size Minimum: 4096 bytes 00:28:30.487 Memory Page Size Maximum: 4096 bytes 00:28:30.487 Persistent Memory Region: Not Supported 00:28:30.487 Optional Asynchronous Events Supported 00:28:30.487 Namespace Attribute Notices: Not Supported 00:28:30.487 Firmware Activation Notices: Not Supported 00:28:30.487 ANA Change Notices: Not Supported 00:28:30.487 PLE Aggregate Log Change Notices: Not Supported 00:28:30.487 LBA Status Info Alert Notices: Not Supported 00:28:30.487 EGE Aggregate Log Change Notices: Not Supported 00:28:30.487 Normal NVM Subsystem Shutdown event: Not Supported 00:28:30.487 Zone Descriptor Change Notices: Not Supported 00:28:30.487 Discovery Log Change Notices: Supported 00:28:30.487 Controller Attributes 00:28:30.487 128-bit Host Identifier: Not Supported 00:28:30.487 Non-Operational Permissive Mode: Not Supported 00:28:30.487 NVM Sets: Not Supported 00:28:30.487 Read Recovery Levels: Not Supported 00:28:30.487 Endurance Groups: Not Supported 00:28:30.487 Predictable Latency Mode: Not Supported 00:28:30.487 Traffic Based Keep ALive: Not Supported 00:28:30.487 Namespace Granularity: Not Supported 00:28:30.487 SQ Associations: Not Supported 00:28:30.487 UUID List: Not Supported 00:28:30.487 Multi-Domain Subsystem: Not Supported 00:28:30.487 Fixed Capacity Management: Not Supported 00:28:30.487 Variable Capacity Management: Not Supported 00:28:30.487 Delete Endurance Group: Not Supported 00:28:30.487 Delete NVM Set: Not Supported 00:28:30.487 Extended LBA Formats Supported: Not Supported 00:28:30.488 Flexible Data Placement Supported: Not Supported 00:28:30.488 00:28:30.488 Controller Memory Buffer Support 00:28:30.488 ================================ 00:28:30.488 Supported: No 00:28:30.488 00:28:30.488 Persistent Memory Region Support 00:28:30.488 ================================ 00:28:30.488 Supported: No 00:28:30.488 00:28:30.488 Admin Command Set Attributes 00:28:30.488 ============================ 00:28:30.488 Security Send/Receive: Not Supported 00:28:30.488 Format NVM: Not Supported 00:28:30.488 Firmware Activate/Download: Not Supported 00:28:30.488 Namespace Management: Not Supported 00:28:30.488 Device Self-Test: Not Supported 00:28:30.488 Directives: Not Supported 00:28:30.488 NVMe-MI: Not Supported 00:28:30.488 Virtualization Management: Not Supported 00:28:30.488 Doorbell Buffer Config: Not Supported 00:28:30.488 Get LBA Status Capability: Not Supported 00:28:30.488 Command & Feature Lockdown Capability: Not Supported 00:28:30.488 Abort Command Limit: 1 00:28:30.488 Async 
Event Request Limit: 4 00:28:30.488 Number of Firmware Slots: N/A 00:28:30.488 Firmware Slot 1 Read-Only: N/A 00:28:30.488 Firmware Activation Without Reset: N/A 00:28:30.488 Multiple Update Detection Support: N/A 00:28:30.488 Firmware Update Granularity: No Information Provided 00:28:30.488 Per-Namespace SMART Log: No 00:28:30.488 Asymmetric Namespace Access Log Page: Not Supported 00:28:30.488 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:30.488 Command Effects Log Page: Not Supported 00:28:30.488 Get Log Page Extended Data: Supported 00:28:30.488 Telemetry Log Pages: Not Supported 00:28:30.488 Persistent Event Log Pages: Not Supported 00:28:30.488 Supported Log Pages Log Page: May Support 00:28:30.488 Commands Supported & Effects Log Page: Not Supported 00:28:30.488 Feature Identifiers & Effects Log Page:May Support 00:28:30.488 NVMe-MI Commands & Effects Log Page: May Support 00:28:30.488 Data Area 4 for Telemetry Log: Not Supported 00:28:30.488 Error Log Page Entries Supported: 128 00:28:30.488 Keep Alive: Not Supported 00:28:30.488 00:28:30.488 NVM Command Set Attributes 00:28:30.488 ========================== 00:28:30.488 Submission Queue Entry Size 00:28:30.488 Max: 1 00:28:30.488 Min: 1 00:28:30.488 Completion Queue Entry Size 00:28:30.488 Max: 1 00:28:30.488 Min: 1 00:28:30.488 Number of Namespaces: 0 00:28:30.488 Compare Command: Not Supported 00:28:30.488 Write Uncorrectable Command: Not Supported 00:28:30.488 Dataset Management Command: Not Supported 00:28:30.488 Write Zeroes Command: Not Supported 00:28:30.488 Set Features Save Field: Not Supported 00:28:30.488 Reservations: Not Supported 00:28:30.488 Timestamp: Not Supported 00:28:30.488 Copy: Not Supported 00:28:30.488 Volatile Write Cache: Not Present 00:28:30.488 Atomic Write Unit (Normal): 1 00:28:30.488 Atomic Write Unit (PFail): 1 00:28:30.488 Atomic Compare & Write Unit: 1 00:28:30.488 Fused Compare & Write: Supported 00:28:30.488 Scatter-Gather List 00:28:30.488 SGL Command Set: Supported 00:28:30.488 SGL Keyed: Supported 00:28:30.488 SGL Bit Bucket Descriptor: Not Supported 00:28:30.488 SGL Metadata Pointer: Not Supported 00:28:30.488 Oversized SGL: Not Supported 00:28:30.488 SGL Metadata Address: Not Supported 00:28:30.488 SGL Offset: Supported 00:28:30.488 Transport SGL Data Block: Not Supported 00:28:30.488 Replay Protected Memory Block: Not Supported 00:28:30.488 00:28:30.488 Firmware Slot Information 00:28:30.488 ========================= 00:28:30.488 Active slot: 0 00:28:30.488 00:28:30.488 00:28:30.488 Error Log 00:28:30.488 ========= 00:28:30.488 00:28:30.488 Active Namespaces 00:28:30.488 ================= 00:28:30.488 Discovery Log Page 00:28:30.488 ================== 00:28:30.488 Generation Counter: 2 00:28:30.488 Number of Records: 2 00:28:30.488 Record Format: 0 00:28:30.488 00:28:30.488 Discovery Log Entry 0 00:28:30.488 ---------------------- 00:28:30.488 Transport Type: 3 (TCP) 00:28:30.488 Address Family: 1 (IPv4) 00:28:30.488 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:30.488 Entry Flags: 00:28:30.488 Duplicate Returned Information: 1 00:28:30.488 Explicit Persistent Connection Support for Discovery: 1 00:28:30.488 Transport Requirements: 00:28:30.488 Secure Channel: Not Required 00:28:30.488 Port ID: 0 (0x0000) 00:28:30.488 Controller ID: 65535 (0xffff) 00:28:30.488 Admin Max SQ Size: 128 00:28:30.488 Transport Service Identifier: 4420 00:28:30.488 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:30.488 Transport Address: 10.0.0.2 00:28:30.488 
Discovery Log Entry 1 00:28:30.488 ---------------------- 00:28:30.488 Transport Type: 3 (TCP) 00:28:30.488 Address Family: 1 (IPv4) 00:28:30.488 Subsystem Type: 2 (NVM Subsystem) 00:28:30.488 Entry Flags: 00:28:30.488 Duplicate Returned Information: 0 00:28:30.488 Explicit Persistent Connection Support for Discovery: 0 00:28:30.488 Transport Requirements: 00:28:30.488 Secure Channel: Not Required 00:28:30.488 Port ID: 0 (0x0000) 00:28:30.488 Controller ID: 65535 (0xffff) 00:28:30.488 Admin Max SQ Size: 128 00:28:30.488 Transport Service Identifier: 4420 00:28:30.488 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:30.488 Transport Address: 10.0.0.2 [2024-10-28 15:24:17.343857] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:28:30.488 [2024-10-28 15:24:17.343883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154100) on tqpair=0x10f2690 00:28:30.488 [2024-10-28 15:24:17.343898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.488 [2024-10-28 15:24:17.343908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154280) on tqpair=0x10f2690 00:28:30.488 [2024-10-28 15:24:17.343917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.488 [2024-10-28 15:24:17.343926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154400) on tqpair=0x10f2690 00:28:30.488 [2024-10-28 15:24:17.343934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.488 [2024-10-28 15:24:17.343943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.488 [2024-10-28 15:24:17.343952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.488 [2024-10-28 15:24:17.343967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.343976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.343983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.488 [2024-10-28 15:24:17.343995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.488 [2024-10-28 15:24:17.344024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.488 [2024-10-28 15:24:17.344158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.488 [2024-10-28 15:24:17.344175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.488 [2024-10-28 15:24:17.344183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.488 [2024-10-28 15:24:17.344209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.488 [2024-10-28 
15:24:17.344238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.488 [2024-10-28 15:24:17.344270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.488 [2024-10-28 15:24:17.344372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.488 [2024-10-28 15:24:17.344387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.488 [2024-10-28 15:24:17.344395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.488 [2024-10-28 15:24:17.344413] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:28:30.488 [2024-10-28 15:24:17.344422] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:28:30.488 [2024-10-28 15:24:17.344440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.488 [2024-10-28 15:24:17.344474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.488 [2024-10-28 15:24:17.344498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.488 [2024-10-28 15:24:17.344619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.488 [2024-10-28 15:24:17.344642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.488 [2024-10-28 15:24:17.344674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.488 [2024-10-28 15:24:17.344716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.488 [2024-10-28 15:24:17.344735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.344747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.344772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.344898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.344916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.344924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.344932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.344958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.344977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.344989] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.345008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.345039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.345168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.345184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.345192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.345219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.345248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.345272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.345360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.345382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.345396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.345439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.345481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.345508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.345626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.345645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.345665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.345695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.345724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.345749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.345841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.345857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.345865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.345890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.345908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.345919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.345943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.346028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.346043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.346051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.346076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.346105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.346128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.346215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.346228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.346236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.346261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.346290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.346318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 
[2024-10-28 15:24:17.346404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.346417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.346425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.346450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.346479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.346506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.346597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.346615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.346623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.489 [2024-10-28 15:24:17.346661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.489 [2024-10-28 15:24:17.346695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.489 [2024-10-28 15:24:17.346732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.489 [2024-10-28 15:24:17.346855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.489 [2024-10-28 15:24:17.346877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.489 [2024-10-28 15:24:17.346891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.489 [2024-10-28 15:24:17.346900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.490 [2024-10-28 15:24:17.346921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.490 [2024-10-28 15:24:17.346932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.490 [2024-10-28 15:24:17.346940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.490 [2024-10-28 15:24:17.346951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.490 [2024-10-28 15:24:17.346976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.490 [2024-10-28 15:24:17.347067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.490 [2024-10-28 15:24:17.347083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:28:30.490 [2024-10-28 15:24:17.347090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.490 [2024-10-28 15:24:17.347098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.490 [2024-10-28 15:24:17.347116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.490 [2024-10-28 15:24:17.347126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.490 [2024-10-28 15:24:17.347133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.490 [2024-10-28 15:24:17.347145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.490 [2024-10-28 15:24:17.347174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.751 [2024-10-28 15:24:17.347263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.751 [2024-10-28 15:24:17.347279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.751 [2024-10-28 15:24:17.347286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.347294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.751 [2024-10-28 15:24:17.347312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.347322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.347330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.751 [2024-10-28 15:24:17.347341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-10-28 15:24:17.347369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.751 [2024-10-28 15:24:17.347452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.751 [2024-10-28 15:24:17.347472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.751 [2024-10-28 15:24:17.347485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.347499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.751 [2024-10-28 15:24:17.347530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.347550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.347562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.751 [2024-10-28 15:24:17.347580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-10-28 15:24:17.347612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.751 [2024-10-28 15:24:17.351667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.751 [2024-10-28 15:24:17.351688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.751 [2024-10-28 15:24:17.351697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.351705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.751 [2024-10-28 15:24:17.351726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.351736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.351744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10f2690) 00:28:30.751 [2024-10-28 15:24:17.351756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-10-28 15:24:17.351782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1154580, cid 3, qid 0 00:28:30.751 [2024-10-28 15:24:17.351920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.751 [2024-10-28 15:24:17.351934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.751 [2024-10-28 15:24:17.351942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.351950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1154580) on tqpair=0x10f2690 00:28:30.751 [2024-10-28 15:24:17.351964] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:28:30.751 00:28:30.751 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:30.751 [2024-10-28 15:24:17.388398] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:28:30.751 [2024-10-28 15:24:17.388447] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259035 ] 00:28:30.751 [2024-10-28 15:24:17.455050] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:28:30.751 [2024-10-28 15:24:17.455111] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:30.751 [2024-10-28 15:24:17.455123] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:30.751 [2024-10-28 15:24:17.455139] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:30.751 [2024-10-28 15:24:17.455153] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:30.751 [2024-10-28 15:24:17.455632] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:28:30.751 [2024-10-28 15:24:17.455687] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x131e690 0 00:28:30.751 [2024-10-28 15:24:17.469667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:30.751 [2024-10-28 15:24:17.469690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:30.751 [2024-10-28 15:24:17.469699] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:30.751 [2024-10-28 15:24:17.469706] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:30.751 [2024-10-28 15:24:17.469741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:28:30.751 [2024-10-28 15:24:17.469755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.469762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.751 [2024-10-28 15:24:17.469777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:30.751 [2024-10-28 15:24:17.469807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.751 [2024-10-28 15:24:17.476664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.751 [2024-10-28 15:24:17.476685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.751 [2024-10-28 15:24:17.476704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.476713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.751 [2024-10-28 15:24:17.476736] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:30.751 [2024-10-28 15:24:17.476749] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:28:30.751 [2024-10-28 15:24:17.476760] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:28:30.751 [2024-10-28 15:24:17.476780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.476790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.476797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.751 [2024-10-28 15:24:17.476810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-10-28 15:24:17.476838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.751 [2024-10-28 15:24:17.476938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.751 [2024-10-28 15:24:17.476951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.751 [2024-10-28 15:24:17.476959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.476967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.751 [2024-10-28 15:24:17.476980] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:28:30.751 [2024-10-28 15:24:17.476996] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:28:30.751 [2024-10-28 15:24:17.477009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.477017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.477025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.751 [2024-10-28 15:24:17.477036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-10-28 15:24:17.477061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1380100, cid 0, qid 0 00:28:30.751 [2024-10-28 15:24:17.477160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.751 [2024-10-28 15:24:17.477176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.751 [2024-10-28 15:24:17.477184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.751 [2024-10-28 15:24:17.477191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.751 [2024-10-28 15:24:17.477201] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:28:30.752 [2024-10-28 15:24:17.477216] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:30.752 [2024-10-28 15:24:17.477230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.477257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.752 [2024-10-28 15:24:17.477280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.752 [2024-10-28 15:24:17.477377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.752 [2024-10-28 15:24:17.477392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.752 [2024-10-28 15:24:17.477400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.752 [2024-10-28 15:24:17.477417] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:30.752 [2024-10-28 15:24:17.477441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.477471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.752 [2024-10-28 15:24:17.477495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.752 [2024-10-28 15:24:17.477584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.752 [2024-10-28 15:24:17.477599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.752 [2024-10-28 15:24:17.477607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.752 [2024-10-28 15:24:17.477623] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:30.752 [2024-10-28 15:24:17.477632] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to controller is disabled (timeout 15000 ms) 00:28:30.752 [2024-10-28 15:24:17.477661] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:30.752 [2024-10-28 15:24:17.477774] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:28:30.752 [2024-10-28 15:24:17.477783] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:30.752 [2024-10-28 15:24:17.477797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.477824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.752 [2024-10-28 15:24:17.477848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.752 [2024-10-28 15:24:17.477947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.752 [2024-10-28 15:24:17.477961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.752 [2024-10-28 15:24:17.477968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.477976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.752 [2024-10-28 15:24:17.477985] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:30.752 [2024-10-28 15:24:17.478002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.478012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.478019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.478031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.752 [2024-10-28 15:24:17.478053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.752 [2024-10-28 15:24:17.478161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.752 [2024-10-28 15:24:17.478174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.752 [2024-10-28 15:24:17.478182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.478189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.752 [2024-10-28 15:24:17.478198] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:30.752 [2024-10-28 15:24:17.478207] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:30.752 [2024-10-28 15:24:17.478222] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no 
timeout) 00:28:30.752 [2024-10-28 15:24:17.478242] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:30.752 [2024-10-28 15:24:17.478258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.478266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.478278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.752 [2024-10-28 15:24:17.478301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.752 [2024-10-28 15:24:17.478441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.752 [2024-10-28 15:24:17.478462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.752 [2024-10-28 15:24:17.478470] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.478478] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, datal=4096, cccid=0 00:28:30.752 [2024-10-28 15:24:17.478486] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380100) on tqpair(0x131e690): expected_datao=0, payload_size=4096 00:28:30.752 [2024-10-28 15:24:17.478494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.478514] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.478524] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.518738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.752 [2024-10-28 15:24:17.518758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.752 [2024-10-28 15:24:17.518767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.518775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.752 [2024-10-28 15:24:17.518787] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:28:30.752 [2024-10-28 15:24:17.518797] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:28:30.752 [2024-10-28 15:24:17.518805] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:28:30.752 [2024-10-28 15:24:17.518813] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:28:30.752 [2024-10-28 15:24:17.518822] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:28:30.752 [2024-10-28 15:24:17.518831] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:28:30.752 [2024-10-28 15:24:17.518847] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:30.752 [2024-10-28 15:24:17.518861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.518870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 
[2024-10-28 15:24:17.518877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.518889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:30.752 [2024-10-28 15:24:17.518915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.752 [2024-10-28 15:24:17.519016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.752 [2024-10-28 15:24:17.519031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.752 [2024-10-28 15:24:17.519039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.752 [2024-10-28 15:24:17.519063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.519091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.752 [2024-10-28 15:24:17.519102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.519126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.752 [2024-10-28 15:24:17.519143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.519168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.752 [2024-10-28 15:24:17.519179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.519203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.752 [2024-10-28 15:24:17.519213] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:30.752 [2024-10-28 15:24:17.519229] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:30.752 [2024-10-28 15:24:17.519242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.752 [2024-10-28 15:24:17.519250] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131e690) 00:28:30.752 [2024-10-28 15:24:17.519262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.753 [2024-10-28 15:24:17.519287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380100, cid 0, qid 0 00:28:30.753 [2024-10-28 15:24:17.519300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380280, cid 1, qid 0 00:28:30.753 [2024-10-28 15:24:17.519309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380400, cid 2, qid 0 00:28:30.753 [2024-10-28 15:24:17.519317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.753 [2024-10-28 15:24:17.519326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380700, cid 4, qid 0 00:28:30.753 [2024-10-28 15:24:17.519450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.753 [2024-10-28 15:24:17.519466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.753 [2024-10-28 15:24:17.519474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.519481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380700) on tqpair=0x131e690 00:28:30.753 [2024-10-28 15:24:17.519495] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:28:30.753 [2024-10-28 15:24:17.519505] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.519522] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.519535] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.519547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.519555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.519562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131e690) 00:28:30.753 [2024-10-28 15:24:17.519574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:30.753 [2024-10-28 15:24:17.519598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380700, cid 4, qid 0 00:28:30.753 [2024-10-28 15:24:17.519706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.753 [2024-10-28 15:24:17.519727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.753 [2024-10-28 15:24:17.519735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.519743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380700) on tqpair=0x131e690 00:28:30.753 [2024-10-28 15:24:17.519819] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.519842] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.519859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.519868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131e690) 00:28:30.753 [2024-10-28 15:24:17.519880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.753 [2024-10-28 15:24:17.519904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380700, cid 4, qid 0 00:28:30.753 [2024-10-28 15:24:17.520033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.753 [2024-10-28 15:24:17.520047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.753 [2024-10-28 15:24:17.520055] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520061] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, datal=4096, cccid=4 00:28:30.753 [2024-10-28 15:24:17.520070] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380700) on tqpair(0x131e690): expected_datao=0, payload_size=4096 00:28:30.753 [2024-10-28 15:24:17.520078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520090] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520098] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.753 [2024-10-28 15:24:17.520121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.753 [2024-10-28 15:24:17.520129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380700) on tqpair=0x131e690 00:28:30.753 [2024-10-28 15:24:17.520154] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:28:30.753 [2024-10-28 15:24:17.520174] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.520194] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.520209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131e690) 00:28:30.753 [2024-10-28 15:24:17.520230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.753 [2024-10-28 15:24:17.520253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380700, cid 4, qid 0 00:28:30.753 [2024-10-28 15:24:17.520390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.753 [2024-10-28 15:24:17.520406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.753 [2024-10-28 15:24:17.520414] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520421] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, datal=4096, cccid=4 00:28:30.753 [2024-10-28 15:24:17.520429] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380700) on tqpair(0x131e690): expected_datao=0, payload_size=4096 00:28:30.753 [2024-10-28 15:24:17.520441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520454] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520462] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.753 [2024-10-28 15:24:17.520486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.753 [2024-10-28 15:24:17.520493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380700) on tqpair=0x131e690 00:28:30.753 [2024-10-28 15:24:17.520525] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.520546] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.520562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.520571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131e690) 00:28:30.753 [2024-10-28 15:24:17.520583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.753 [2024-10-28 15:24:17.520606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380700, cid 4, qid 0 00:28:30.753 [2024-10-28 15:24:17.524666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.753 [2024-10-28 15:24:17.524684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.753 [2024-10-28 15:24:17.524692] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, datal=4096, cccid=4 00:28:30.753 [2024-10-28 15:24:17.524707] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380700) on tqpair(0x131e690): expected_datao=0, payload_size=4096 00:28:30.753 [2024-10-28 15:24:17.524715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524726] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.753 [2024-10-28 15:24:17.524754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.753 [2024-10-28 15:24:17.524761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380700) on tqpair=0x131e690 00:28:30.753 [2024-10-28 15:24:17.524784] 
nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.524802] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.524820] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.524833] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.524843] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.524852] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.524863] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:28:30.753 [2024-10-28 15:24:17.524871] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:28:30.753 [2024-10-28 15:24:17.524885] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:28:30.753 [2024-10-28 15:24:17.524905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131e690) 00:28:30.753 [2024-10-28 15:24:17.524927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.753 [2024-10-28 15:24:17.524939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.524954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131e690) 00:28:30.753 [2024-10-28 15:24:17.524964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.753 [2024-10-28 15:24:17.524993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380700, cid 4, qid 0 00:28:30.753 [2024-10-28 15:24:17.525007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380880, cid 5, qid 0 00:28:30.753 [2024-10-28 15:24:17.525125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.753 [2024-10-28 15:24:17.525139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.753 [2024-10-28 15:24:17.525147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.525155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380700) on tqpair=0x131e690 00:28:30.753 [2024-10-28 15:24:17.525166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.753 [2024-10-28 15:24:17.525176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.753 [2024-10-28 15:24:17.525183] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.753 [2024-10-28 15:24:17.525190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380880) on tqpair=0x131e690 00:28:30.753 [2024-10-28 15:24:17.525207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131e690) 00:28:30.754 [2024-10-28 15:24:17.525229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-10-28 15:24:17.525252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380880, cid 5, qid 0 00:28:30.754 [2024-10-28 15:24:17.525351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.754 [2024-10-28 15:24:17.525366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.754 [2024-10-28 15:24:17.525374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380880) on tqpair=0x131e690 00:28:30.754 [2024-10-28 15:24:17.525399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131e690) 00:28:30.754 [2024-10-28 15:24:17.525421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-10-28 15:24:17.525444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380880, cid 5, qid 0 00:28:30.754 [2024-10-28 15:24:17.525536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.754 [2024-10-28 15:24:17.525549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.754 [2024-10-28 15:24:17.525556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380880) on tqpair=0x131e690 00:28:30.754 [2024-10-28 15:24:17.525581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131e690) 00:28:30.754 [2024-10-28 15:24:17.525607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-10-28 15:24:17.525630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380880, cid 5, qid 0 00:28:30.754 [2024-10-28 15:24:17.525732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.754 [2024-10-28 15:24:17.525748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.754 [2024-10-28 15:24:17.525756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380880) on tqpair=0x131e690 00:28:30.754 [2024-10-28 15:24:17.525790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x131e690) 
00:28:30.754 [2024-10-28 15:24:17.525814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-10-28 15:24:17.525827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x131e690) 00:28:30.754 [2024-10-28 15:24:17.525846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-10-28 15:24:17.525858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x131e690) 00:28:30.754 [2024-10-28 15:24:17.525877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-10-28 15:24:17.525894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.525904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x131e690) 00:28:30.754 [2024-10-28 15:24:17.525915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-10-28 15:24:17.525940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380880, cid 5, qid 0 00:28:30.754 [2024-10-28 15:24:17.525952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380700, cid 4, qid 0 00:28:30.754 [2024-10-28 15:24:17.525961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380a00, cid 6, qid 0 00:28:30.754 [2024-10-28 15:24:17.525970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380b80, cid 7, qid 0 00:28:30.754 [2024-10-28 15:24:17.526182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.754 [2024-10-28 15:24:17.526198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.754 [2024-10-28 15:24:17.526206] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526213] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, datal=8192, cccid=5 00:28:30.754 [2024-10-28 15:24:17.526222] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380880) on tqpair(0x131e690): expected_datao=0, payload_size=8192 00:28:30.754 [2024-10-28 15:24:17.526230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526241] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526250] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.754 [2024-10-28 15:24:17.526269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.754 [2024-10-28 15:24:17.526280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526288] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, 
datal=512, cccid=4 00:28:30.754 [2024-10-28 15:24:17.526296] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380700) on tqpair(0x131e690): expected_datao=0, payload_size=512 00:28:30.754 [2024-10-28 15:24:17.526304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526314] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526322] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.754 [2024-10-28 15:24:17.526340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.754 [2024-10-28 15:24:17.526347] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526354] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, datal=512, cccid=6 00:28:30.754 [2024-10-28 15:24:17.526362] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380a00) on tqpair(0x131e690): expected_datao=0, payload_size=512 00:28:30.754 [2024-10-28 15:24:17.526370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526380] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526388] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:30.754 [2024-10-28 15:24:17.526407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:30.754 [2024-10-28 15:24:17.526414] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526420] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x131e690): datao=0, datal=4096, cccid=7 00:28:30.754 [2024-10-28 15:24:17.526429] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1380b80) on tqpair(0x131e690): expected_datao=0, payload_size=4096 00:28:30.754 [2024-10-28 15:24:17.526437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526457] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526467] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.754 [2024-10-28 15:24:17.526490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.754 [2024-10-28 15:24:17.526497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380880) on tqpair=0x131e690 00:28:30.754 [2024-10-28 15:24:17.526528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.754 [2024-10-28 15:24:17.526541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.754 [2024-10-28 15:24:17.526549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380700) on tqpair=0x131e690 00:28:30.754 [2024-10-28 15:24:17.526573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.754 [2024-10-28 15:24:17.526585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:28:30.754 [2024-10-28 15:24:17.526592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380a00) on tqpair=0x131e690 00:28:30.754 [2024-10-28 15:24:17.526611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.754 [2024-10-28 15:24:17.526622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.754 [2024-10-28 15:24:17.526629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.754 [2024-10-28 15:24:17.526636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380b80) on tqpair=0x131e690 00:28:30.754 ===================================================== 00:28:30.754 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.754 ===================================================== 00:28:30.754 Controller Capabilities/Features 00:28:30.754 ================================ 00:28:30.754 Vendor ID: 8086 00:28:30.754 Subsystem Vendor ID: 8086 00:28:30.754 Serial Number: SPDK00000000000001 00:28:30.754 Model Number: SPDK bdev Controller 00:28:30.754 Firmware Version: 25.01 00:28:30.754 Recommended Arb Burst: 6 00:28:30.754 IEEE OUI Identifier: e4 d2 5c 00:28:30.754 Multi-path I/O 00:28:30.754 May have multiple subsystem ports: Yes 00:28:30.754 May have multiple controllers: Yes 00:28:30.754 Associated with SR-IOV VF: No 00:28:30.754 Max Data Transfer Size: 131072 00:28:30.754 Max Number of Namespaces: 32 00:28:30.754 Max Number of I/O Queues: 127 00:28:30.754 NVMe Specification Version (VS): 1.3 00:28:30.754 NVMe Specification Version (Identify): 1.3 00:28:30.754 Maximum Queue Entries: 128 00:28:30.754 Contiguous Queues Required: Yes 00:28:30.754 Arbitration Mechanisms Supported 00:28:30.754 Weighted Round Robin: Not Supported 00:28:30.754 Vendor Specific: Not Supported 00:28:30.754 Reset Timeout: 15000 ms 00:28:30.754 Doorbell Stride: 4 bytes 00:28:30.754 NVM Subsystem Reset: Not Supported 00:28:30.754 Command Sets Supported 00:28:30.754 NVM Command Set: Supported 00:28:30.754 Boot Partition: Not Supported 00:28:30.754 Memory Page Size Minimum: 4096 bytes 00:28:30.754 Memory Page Size Maximum: 4096 bytes 00:28:30.754 Persistent Memory Region: Not Supported 00:28:30.754 Optional Asynchronous Events Supported 00:28:30.754 Namespace Attribute Notices: Supported 00:28:30.754 Firmware Activation Notices: Not Supported 00:28:30.754 ANA Change Notices: Not Supported 00:28:30.754 PLE Aggregate Log Change Notices: Not Supported 00:28:30.755 LBA Status Info Alert Notices: Not Supported 00:28:30.755 EGE Aggregate Log Change Notices: Not Supported 00:28:30.755 Normal NVM Subsystem Shutdown event: Not Supported 00:28:30.755 Zone Descriptor Change Notices: Not Supported 00:28:30.755 Discovery Log Change Notices: Not Supported 00:28:30.755 Controller Attributes 00:28:30.755 128-bit Host Identifier: Supported 00:28:30.755 Non-Operational Permissive Mode: Not Supported 00:28:30.755 NVM Sets: Not Supported 00:28:30.755 Read Recovery Levels: Not Supported 00:28:30.755 Endurance Groups: Not Supported 00:28:30.755 Predictable Latency Mode: Not Supported 00:28:30.755 Traffic Based Keep ALive: Not Supported 00:28:30.755 Namespace Granularity: Not Supported 00:28:30.755 SQ Associations: Not Supported 00:28:30.755 UUID List: Not Supported 00:28:30.755 Multi-Domain Subsystem: Not Supported 00:28:30.755 Fixed Capacity Management: Not Supported 00:28:30.755 Variable 
Capacity Management: Not Supported 00:28:30.755 Delete Endurance Group: Not Supported 00:28:30.755 Delete NVM Set: Not Supported 00:28:30.755 Extended LBA Formats Supported: Not Supported 00:28:30.755 Flexible Data Placement Supported: Not Supported 00:28:30.755 00:28:30.755 Controller Memory Buffer Support 00:28:30.755 ================================ 00:28:30.755 Supported: No 00:28:30.755 00:28:30.755 Persistent Memory Region Support 00:28:30.755 ================================ 00:28:30.755 Supported: No 00:28:30.755 00:28:30.755 Admin Command Set Attributes 00:28:30.755 ============================ 00:28:30.755 Security Send/Receive: Not Supported 00:28:30.755 Format NVM: Not Supported 00:28:30.755 Firmware Activate/Download: Not Supported 00:28:30.755 Namespace Management: Not Supported 00:28:30.755 Device Self-Test: Not Supported 00:28:30.755 Directives: Not Supported 00:28:30.755 NVMe-MI: Not Supported 00:28:30.755 Virtualization Management: Not Supported 00:28:30.755 Doorbell Buffer Config: Not Supported 00:28:30.755 Get LBA Status Capability: Not Supported 00:28:30.755 Command & Feature Lockdown Capability: Not Supported 00:28:30.755 Abort Command Limit: 4 00:28:30.755 Async Event Request Limit: 4 00:28:30.755 Number of Firmware Slots: N/A 00:28:30.755 Firmware Slot 1 Read-Only: N/A 00:28:30.755 Firmware Activation Without Reset: N/A 00:28:30.755 Multiple Update Detection Support: N/A 00:28:30.755 Firmware Update Granularity: No Information Provided 00:28:30.755 Per-Namespace SMART Log: No 00:28:30.755 Asymmetric Namespace Access Log Page: Not Supported 00:28:30.755 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:30.755 Command Effects Log Page: Supported 00:28:30.755 Get Log Page Extended Data: Supported 00:28:30.755 Telemetry Log Pages: Not Supported 00:28:30.755 Persistent Event Log Pages: Not Supported 00:28:30.755 Supported Log Pages Log Page: May Support 00:28:30.755 Commands Supported & Effects Log Page: Not Supported 00:28:30.755 Feature Identifiers & Effects Log Page:May Support 00:28:30.755 NVMe-MI Commands & Effects Log Page: May Support 00:28:30.755 Data Area 4 for Telemetry Log: Not Supported 00:28:30.755 Error Log Page Entries Supported: 128 00:28:30.755 Keep Alive: Supported 00:28:30.755 Keep Alive Granularity: 10000 ms 00:28:30.755 00:28:30.755 NVM Command Set Attributes 00:28:30.755 ========================== 00:28:30.755 Submission Queue Entry Size 00:28:30.755 Max: 64 00:28:30.755 Min: 64 00:28:30.755 Completion Queue Entry Size 00:28:30.755 Max: 16 00:28:30.755 Min: 16 00:28:30.755 Number of Namespaces: 32 00:28:30.755 Compare Command: Supported 00:28:30.755 Write Uncorrectable Command: Not Supported 00:28:30.755 Dataset Management Command: Supported 00:28:30.755 Write Zeroes Command: Supported 00:28:30.755 Set Features Save Field: Not Supported 00:28:30.755 Reservations: Supported 00:28:30.755 Timestamp: Not Supported 00:28:30.755 Copy: Supported 00:28:30.755 Volatile Write Cache: Present 00:28:30.755 Atomic Write Unit (Normal): 1 00:28:30.755 Atomic Write Unit (PFail): 1 00:28:30.755 Atomic Compare & Write Unit: 1 00:28:30.755 Fused Compare & Write: Supported 00:28:30.755 Scatter-Gather List 00:28:30.755 SGL Command Set: Supported 00:28:30.755 SGL Keyed: Supported 00:28:30.755 SGL Bit Bucket Descriptor: Not Supported 00:28:30.755 SGL Metadata Pointer: Not Supported 00:28:30.755 Oversized SGL: Not Supported 00:28:30.755 SGL Metadata Address: Not Supported 00:28:30.755 SGL Offset: Supported 00:28:30.755 Transport SGL Data Block: Not Supported 00:28:30.755 
Replay Protected Memory Block: Not Supported 00:28:30.755 00:28:30.755 Firmware Slot Information 00:28:30.755 ========================= 00:28:30.755 Active slot: 1 00:28:30.755 Slot 1 Firmware Revision: 25.01 00:28:30.755 00:28:30.755 00:28:30.755 Commands Supported and Effects 00:28:30.755 ============================== 00:28:30.755 Admin Commands 00:28:30.755 -------------- 00:28:30.755 Get Log Page (02h): Supported 00:28:30.755 Identify (06h): Supported 00:28:30.755 Abort (08h): Supported 00:28:30.755 Set Features (09h): Supported 00:28:30.755 Get Features (0Ah): Supported 00:28:30.755 Asynchronous Event Request (0Ch): Supported 00:28:30.755 Keep Alive (18h): Supported 00:28:30.755 I/O Commands 00:28:30.755 ------------ 00:28:30.755 Flush (00h): Supported LBA-Change 00:28:30.755 Write (01h): Supported LBA-Change 00:28:30.755 Read (02h): Supported 00:28:30.755 Compare (05h): Supported 00:28:30.755 Write Zeroes (08h): Supported LBA-Change 00:28:30.755 Dataset Management (09h): Supported LBA-Change 00:28:30.755 Copy (19h): Supported LBA-Change 00:28:30.755 00:28:30.755 Error Log 00:28:30.755 ========= 00:28:30.755 00:28:30.755 Arbitration 00:28:30.755 =========== 00:28:30.755 Arbitration Burst: 1 00:28:30.755 00:28:30.755 Power Management 00:28:30.755 ================ 00:28:30.755 Number of Power States: 1 00:28:30.755 Current Power State: Power State #0 00:28:30.755 Power State #0: 00:28:30.755 Max Power: 0.00 W 00:28:30.755 Non-Operational State: Operational 00:28:30.755 Entry Latency: Not Reported 00:28:30.755 Exit Latency: Not Reported 00:28:30.755 Relative Read Throughput: 0 00:28:30.755 Relative Read Latency: 0 00:28:30.755 Relative Write Throughput: 0 00:28:30.755 Relative Write Latency: 0 00:28:30.755 Idle Power: Not Reported 00:28:30.755 Active Power: Not Reported 00:28:30.755 Non-Operational Permissive Mode: Not Supported 00:28:30.755 00:28:30.755 Health Information 00:28:30.755 ================== 00:28:30.755 Critical Warnings: 00:28:30.755 Available Spare Space: OK 00:28:30.755 Temperature: OK 00:28:30.755 Device Reliability: OK 00:28:30.755 Read Only: No 00:28:30.755 Volatile Memory Backup: OK 00:28:30.755 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:30.755 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:30.755 Available Spare: 0% 00:28:30.755 Available Spare Threshold: 0% 00:28:30.755 Life Percentage Used:[2024-10-28 15:24:17.526769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.755 [2024-10-28 15:24:17.526786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x131e690) 00:28:30.755 [2024-10-28 15:24:17.526799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.755 [2024-10-28 15:24:17.526824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380b80, cid 7, qid 0 00:28:30.755 [2024-10-28 15:24:17.526938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.755 [2024-10-28 15:24:17.526952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.755 [2024-10-28 15:24:17.526959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.755 [2024-10-28 15:24:17.526967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380b80) on tqpair=0x131e690 00:28:30.755 [2024-10-28 15:24:17.527016] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:28:30.755 
[2024-10-28 15:24:17.527038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380100) on tqpair=0x131e690 00:28:30.755 [2024-10-28 15:24:17.527049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-10-28 15:24:17.527059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380280) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.527068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-10-28 15:24:17.527077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380400) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.527086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-10-28 15:24:17.527095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.527103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-10-28 15:24:17.527116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.527143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.527168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.527267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.527283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.527291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.527310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.527337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.527367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.527485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.527500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.527507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.527527] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:28:30.756 [2024-10-28 15:24:17.527537] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:28:30.756 [2024-10-28 15:24:17.527555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.527583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.527606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.527707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.527722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.527730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.527755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.527784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.527807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.527898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.527911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.527919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.527944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.527961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.527972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.527994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.528092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.528108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.528115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 
00:28:30.756 [2024-10-28 15:24:17.528141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.528170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.528192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.528287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.528302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.528314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.528340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.528369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.528392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.528484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.528500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.528507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.528533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.528550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.528561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.528584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.532667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.532685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.532693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.532700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.532720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.532730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
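The identify dump above is what SPDK's identify example prints after attaching to the TCP target at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1). For reference only, and not part of this run, a comparable view can usually be obtained from a Linux initiator with nvme-cli; the address, port, and NQN below are taken from the log, while the /dev/nvme0 device names are assumptions that depend on what the host already has attached:

    # Discover the subsystems exported by the target logged above
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # Connect to the subsystem, then read the controller and namespace data
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list
    nvme id-ctrl /dev/nvme0      # controller capabilities, cf. the dump above (device name assumed)
    nvme id-ns /dev/nvme0n1      # namespace details, cf. the Active Namespaces section below (device name assumed)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1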
00:28:30.756 [2024-10-28 15:24:17.532738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x131e690) 00:28:30.756 [2024-10-28 15:24:17.532750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.756 [2024-10-28 15:24:17.532774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1380580, cid 3, qid 0 00:28:30.756 [2024-10-28 15:24:17.532878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:30.756 [2024-10-28 15:24:17.532891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:30.756 [2024-10-28 15:24:17.532899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:30.756 [2024-10-28 15:24:17.532906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1380580) on tqpair=0x131e690 00:28:30.756 [2024-10-28 15:24:17.532920] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:28:30.756 0% 00:28:30.756 Data Units Read: 0 00:28:30.756 Data Units Written: 0 00:28:30.756 Host Read Commands: 0 00:28:30.756 Host Write Commands: 0 00:28:30.756 Controller Busy Time: 0 minutes 00:28:30.756 Power Cycles: 0 00:28:30.756 Power On Hours: 0 hours 00:28:30.756 Unsafe Shutdowns: 0 00:28:30.756 Unrecoverable Media Errors: 0 00:28:30.756 Lifetime Error Log Entries: 0 00:28:30.756 Warning Temperature Time: 0 minutes 00:28:30.756 Critical Temperature Time: 0 minutes 00:28:30.756 00:28:30.756 Number of Queues 00:28:30.756 ================ 00:28:30.756 Number of I/O Submission Queues: 127 00:28:30.756 Number of I/O Completion Queues: 127 00:28:30.756 00:28:30.756 Active Namespaces 00:28:30.756 ================= 00:28:30.756 Namespace ID:1 00:28:30.756 Error Recovery Timeout: Unlimited 00:28:30.756 Command Set Identifier: NVM (00h) 00:28:30.756 Deallocate: Supported 00:28:30.756 Deallocated/Unwritten Error: Not Supported 00:28:30.756 Deallocated Read Value: Unknown 00:28:30.756 Deallocate in Write Zeroes: Not Supported 00:28:30.756 Deallocated Guard Field: 0xFFFF 00:28:30.756 Flush: Supported 00:28:30.756 Reservation: Supported 00:28:30.756 Namespace Sharing Capabilities: Multiple Controllers 00:28:30.756 Size (in LBAs): 131072 (0GiB) 00:28:30.756 Capacity (in LBAs): 131072 (0GiB) 00:28:30.756 Utilization (in LBAs): 131072 (0GiB) 00:28:30.756 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:30.756 EUI64: ABCDEF0123456789 00:28:30.756 UUID: 22f85353-4f5c-4299-9069-c5bf58cbf97c 00:28:30.756 Thin Provisioning: Not Supported 00:28:30.756 Per-NS Atomic Units: Yes 00:28:30.757 Atomic Boundary Size (Normal): 0 00:28:30.757 Atomic Boundary Size (PFail): 0 00:28:30.757 Atomic Boundary Offset: 0 00:28:30.757 Maximum Single Source Range Length: 65535 00:28:30.757 Maximum Copy Length: 65535 00:28:30.757 Maximum Source Range Count: 1 00:28:30.757 NGUID/EUI64 Never Reused: No 00:28:30.757 Namespace Write Protected: No 00:28:30.757 Number of LBA Formats: 1 00:28:30.757 Current LBA Format: LBA Format #00 00:28:30.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:30.757 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.757 15:24:17 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.757 rmmod nvme_tcp 00:28:30.757 rmmod nvme_fabrics 00:28:30.757 rmmod nvme_keyring 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3258881 ']' 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3258881 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3258881 ']' 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3258881 00:28:30.757 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:28:31.015 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.015 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3258881 00:28:31.015 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:31.015 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:31.016 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3258881' 00:28:31.016 killing process with pid 3258881 00:28:31.016 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3258881 00:28:31.016 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3258881 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.275 15:24:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.185 15:24:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.185 00:28:33.185 real 0m6.872s 00:28:33.185 user 0m5.309s 00:28:33.185 sys 0m2.858s 00:28:33.185 15:24:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:33.185 15:24:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.185 ************************************ 00:28:33.185 END TEST nvmf_identify 00:28:33.185 ************************************ 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.446 ************************************ 00:28:33.446 START TEST nvmf_perf 00:28:33.446 ************************************ 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:33.446 * Looking for test storage... 
00:28:33.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lcov --version 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:33.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.446 --rc genhtml_branch_coverage=1 00:28:33.446 --rc genhtml_function_coverage=1 00:28:33.446 --rc genhtml_legend=1 00:28:33.446 --rc geninfo_all_blocks=1 00:28:33.446 --rc geninfo_unexecuted_blocks=1 00:28:33.446 00:28:33.446 ' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:33.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.446 --rc genhtml_branch_coverage=1 00:28:33.446 --rc genhtml_function_coverage=1 00:28:33.446 --rc genhtml_legend=1 00:28:33.446 --rc geninfo_all_blocks=1 00:28:33.446 --rc geninfo_unexecuted_blocks=1 00:28:33.446 00:28:33.446 ' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:33.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.446 --rc genhtml_branch_coverage=1 00:28:33.446 --rc genhtml_function_coverage=1 00:28:33.446 --rc genhtml_legend=1 00:28:33.446 --rc geninfo_all_blocks=1 00:28:33.446 --rc geninfo_unexecuted_blocks=1 00:28:33.446 00:28:33.446 ' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:33.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.446 --rc genhtml_branch_coverage=1 00:28:33.446 --rc genhtml_function_coverage=1 00:28:33.446 --rc genhtml_legend=1 00:28:33.446 --rc geninfo_all_blocks=1 00:28:33.446 --rc geninfo_unexecuted_blocks=1 00:28:33.446 00:28:33.446 ' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.446 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.447 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.707 15:24:20 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.707 15:24:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.996 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:36.997 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:36.997 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:36.997 Found net devices under 0000:84:00.0: cvl_0_0 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.997 15:24:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:36.997 Found net devices under 0000:84:00.1: cvl_0_1 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.997 15:24:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:28:36.997 00:28:36.997 --- 10.0.0.2 ping statistics --- 00:28:36.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.997 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:28:36.997 00:28:36.997 --- 10.0.0.1 ping statistics --- 00:28:36.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.997 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3261112 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3261112 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3261112 ']' 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:36.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.997 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:36.997 [2024-10-28 15:24:23.525340] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:28:36.997 [2024-10-28 15:24:23.525517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.997 [2024-10-28 15:24:23.711933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.997 [2024-10-28 15:24:23.835935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.997 [2024-10-28 15:24:23.836046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.997 [2024-10-28 15:24:23.836081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.997 [2024-10-28 15:24:23.836124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.997 [2024-10-28 15:24:23.836140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.997 [2024-10-28 15:24:23.839397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.998 [2024-10-28 15:24:23.839498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.998 [2024-10-28 15:24:23.839597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.998 [2024-10-28 15:24:23.839601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.255 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.255 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:28:37.255 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.255 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.255 15:24:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:37.255 15:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.255 15:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:37.255 15:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:40.532 15:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:40.532 15:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:40.789 15:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:28:40.789 15:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:41.353 15:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
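The trace above and the lines that follow are host/perf.sh building the target configuration over RPC: gen_nvme.sh plus load_subsystem_config attaches the local NVMe device (traddr 0000:82:00.0), bdev_malloc_create adds Malloc0, and the next steps create the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, its namespaces, and the 10.0.0.2:4420 listeners. A minimal sketch of that RPC sequence, assuming an nvmf_tgt is already running on the default /var/tmp/spdk.sock and that scripts/rpc.py is invoked from an SPDK checkout (commands, NQN, serial and address are copied from this log; the shortened paths are the only assumption):

  scripts/rpc.py bdev_malloc_create 64 512                                        # Malloc0, as at perf.sh@31
  scripts/rpc.py nvmf_create_transport -t tcp -o                                  # TCP transport, as at perf.sh@42
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, spdk_nvme_perf reaches the subsystem with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', which is what the perf runs below use.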
00:28:41.353 15:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:28:41.353 15:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:41.353 15:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:41.353 15:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:42.285 [2024-10-28 15:24:28.801253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.285 15:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.543 15:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:42.543 15:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.801 15:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:42.801 15:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:43.067 15:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.632 [2024-10-28 15:24:30.435230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.632 15:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.564 15:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:28:44.564 15:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:28:44.564 15:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:44.564 15:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:28:45.496 Initializing NVMe Controllers 00:28:45.496 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:28:45.496 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:28:45.496 Initialization complete. Launching workers. 
00:28:45.496 ======================================================== 00:28:45.496 Latency(us) 00:28:45.496 Device Information : IOPS MiB/s Average min max 00:28:45.496 PCIE (0000:82:00.0) NSID 1 from core 0: 85086.15 332.37 375.73 41.04 4565.93 00:28:45.496 ======================================================== 00:28:45.496 Total : 85086.15 332.37 375.73 41.04 4565.93 00:28:45.496 00:28:45.753 15:24:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:47.125 Initializing NVMe Controllers 00:28:47.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:47.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:47.125 Initialization complete. Launching workers. 00:28:47.125 ======================================================== 00:28:47.125 Latency(us) 00:28:47.125 Device Information : IOPS MiB/s Average min max 00:28:47.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 105.63 0.41 9458.32 137.56 45842.40 00:28:47.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.81 0.21 18807.10 6942.13 47909.67 00:28:47.125 ======================================================== 00:28:47.125 Total : 160.44 0.63 12652.00 137.56 47909.67 00:28:47.125 00:28:47.125 15:24:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:48.059 Initializing NVMe Controllers 00:28:48.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:48.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:48.059 Initialization complete. Launching workers. 00:28:48.059 ======================================================== 00:28:48.059 Latency(us) 00:28:48.059 Device Information : IOPS MiB/s Average min max 00:28:48.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8481.47 33.13 3774.09 618.54 7709.30 00:28:48.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3869.55 15.12 8324.27 5403.56 16440.82 00:28:48.059 ======================================================== 00:28:48.059 Total : 12351.01 48.25 5199.65 618.54 16440.82 00:28:48.059 00:28:48.059 15:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:48.059 15:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:48.059 15:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.585 Initializing NVMe Controllers 00:28:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.585 Controller IO queue size 128, less than required. 00:28:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:50.585 Controller IO queue size 128, less than required. 00:28:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:50.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:50.585 Initialization complete. Launching workers. 00:28:50.585 ======================================================== 00:28:50.585 Latency(us) 00:28:50.585 Device Information : IOPS MiB/s Average min max 00:28:50.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1362.30 340.58 96546.46 66507.86 173355.47 00:28:50.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.42 142.35 233352.83 110338.13 352352.49 00:28:50.585 ======================================================== 00:28:50.585 Total : 1931.72 482.93 136873.18 66507.86 352352.49 00:28:50.585 00:28:50.585 15:24:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:50.843 No valid NVMe controllers or AIO or URING devices found 00:28:50.843 Initializing NVMe Controllers 00:28:50.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.843 Controller IO queue size 128, less than required. 00:28:50.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.843 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:50.843 Controller IO queue size 128, less than required. 00:28:50.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.843 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:50.843 WARNING: Some requested NVMe devices were skipped 00:28:50.843 15:24:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:53.371 Initializing NVMe Controllers 00:28:53.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.371 Controller IO queue size 128, less than required. 00:28:53.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.371 Controller IO queue size 128, less than required. 00:28:53.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:53.371 Initialization complete. Launching workers. 
00:28:53.371 00:28:53.371 ==================== 00:28:53.371 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:53.371 TCP transport: 00:28:53.371 polls: 7728 00:28:53.371 idle_polls: 5280 00:28:53.371 sock_completions: 2448 00:28:53.371 nvme_completions: 4945 00:28:53.371 submitted_requests: 7444 00:28:53.371 queued_requests: 1 00:28:53.371 00:28:53.371 ==================== 00:28:53.371 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:53.371 TCP transport: 00:28:53.371 polls: 8015 00:28:53.371 idle_polls: 5496 00:28:53.371 sock_completions: 2519 00:28:53.371 nvme_completions: 4999 00:28:53.371 submitted_requests: 7482 00:28:53.371 queued_requests: 1 00:28:53.371 ======================================================== 00:28:53.371 Latency(us) 00:28:53.371 Device Information : IOPS MiB/s Average min max 00:28:53.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1234.50 308.62 106530.13 53839.53 163848.63 00:28:53.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1247.98 312.00 104066.47 57936.47 164066.55 00:28:53.371 ======================================================== 00:28:53.371 Total : 2482.48 620.62 105291.61 53839.53 164066.55 00:28:53.371 00:28:53.371 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:53.371 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.938 rmmod nvme_tcp 00:28:53.938 rmmod nvme_fabrics 00:28:53.938 rmmod nvme_keyring 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3261112 ']' 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3261112 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3261112 ']' 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3261112 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3261112 00:28:53.938 15:24:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3261112' 00:28:53.938 killing process with pid 3261112 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3261112 00:28:53.938 15:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3261112 00:28:55.836 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.836 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.836 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.836 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:55.836 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:28:55.837 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.837 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.837 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.837 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.837 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.837 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.837 15:24:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:57.739 00:28:57.739 real 0m24.372s 00:28:57.739 user 1m15.395s 00:28:57.739 sys 0m6.866s 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:57.739 ************************************ 00:28:57.739 END TEST nvmf_perf 00:28:57.739 ************************************ 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.739 ************************************ 00:28:57.739 START TEST nvmf_fio_host 00:28:57.739 ************************************ 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:57.739 * Looking for test storage... 
00:28:57.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:57.739 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lcov --version 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:57.999 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:58.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.000 --rc genhtml_branch_coverage=1 00:28:58.000 --rc genhtml_function_coverage=1 00:28:58.000 --rc genhtml_legend=1 00:28:58.000 --rc geninfo_all_blocks=1 00:28:58.000 --rc geninfo_unexecuted_blocks=1 00:28:58.000 00:28:58.000 ' 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:58.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.000 --rc genhtml_branch_coverage=1 00:28:58.000 --rc genhtml_function_coverage=1 00:28:58.000 --rc genhtml_legend=1 00:28:58.000 --rc geninfo_all_blocks=1 00:28:58.000 --rc geninfo_unexecuted_blocks=1 00:28:58.000 00:28:58.000 ' 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:58.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.000 --rc genhtml_branch_coverage=1 00:28:58.000 --rc genhtml_function_coverage=1 00:28:58.000 --rc genhtml_legend=1 00:28:58.000 --rc geninfo_all_blocks=1 00:28:58.000 --rc geninfo_unexecuted_blocks=1 00:28:58.000 00:28:58.000 ' 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:58.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.000 --rc genhtml_branch_coverage=1 00:28:58.000 --rc genhtml_function_coverage=1 00:28:58.000 --rc genhtml_legend=1 00:28:58.000 --rc geninfo_all_blocks=1 00:28:58.000 --rc geninfo_unexecuted_blocks=1 00:28:58.000 00:28:58.000 ' 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.000 15:24:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.000 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:58.001 
15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.001 15:24:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.292 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:01.293 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:01.293 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:01.293 Found net devices under 0000:84:00.0: cvl_0_0 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:01.293 Found net devices under 0000:84:00.1: cvl_0_1 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.293 15:24:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:29:01.293 00:29:01.293 --- 10.0.0.2 ping statistics --- 00:29:01.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.293 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:29:01.293 00:29:01.293 --- 10.0.0.1 ping statistics --- 00:29:01.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.293 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3265357 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3265357 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3265357 ']' 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.293 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.294 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.294 15:24:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.552 [2024-10-28 15:24:48.238346] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:29:01.552 [2024-10-28 15:24:48.238517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.552 [2024-10-28 15:24:48.372452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:01.811 [2024-10-28 15:24:48.441735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.811 [2024-10-28 15:24:48.441794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.811 [2024-10-28 15:24:48.441811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.811 [2024-10-28 15:24:48.441825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.811 [2024-10-28 15:24:48.441837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.811 [2024-10-28 15:24:48.443745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.811 [2024-10-28 15:24:48.443803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.811 [2024-10-28 15:24:48.443799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.811 [2024-10-28 15:24:48.443773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.743 15:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.743 15:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:02.743 15:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:03.001 [2024-10-28 15:24:49.658970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.001 15:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:03.001 15:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.001 15:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.001 15:24:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:03.639 Malloc1 00:29:03.639 15:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.272 15:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:04.529 15:24:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.463 [2024-10-28 15:24:51.969362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.463 15:24:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:05.463 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:05.722 15:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:05.722 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:05.722 fio-3.35 00:29:05.722 Starting 1 thread 00:29:08.248 00:29:08.248 test: (groupid=0, jobs=1): 
err= 0: pid=3265978: Mon Oct 28 15:24:54 2024 00:29:08.248 read: IOPS=8931, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2007msec) 00:29:08.248 slat (usec): min=2, max=272, avg= 3.16, stdev= 2.72 00:29:08.248 clat (usec): min=2659, max=12666, avg=7800.79, stdev=629.96 00:29:08.248 lat (usec): min=2689, max=12669, avg=7803.95, stdev=629.84 00:29:08.248 clat percentiles (usec): 00:29:08.248 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:29:08.248 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:29:08.248 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8717], 00:29:08.248 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[10290], 99.95th=[12256], 00:29:08.248 | 99.99th=[12649] 00:29:08.248 bw ( KiB/s): min=34864, max=36328, per=99.97%, avg=35718.00, stdev=614.09, samples=4 00:29:08.248 iops : min= 8716, max= 9082, avg=8929.50, stdev=153.52, samples=4 00:29:08.248 write: IOPS=8946, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2007msec); 0 zone resets 00:29:08.248 slat (usec): min=2, max=253, avg= 3.27, stdev= 2.09 00:29:08.248 clat (usec): min=2152, max=12415, avg=6484.49, stdev=539.32 00:29:08.248 lat (usec): min=2161, max=12418, avg=6487.76, stdev=539.25 00:29:08.248 clat percentiles (usec): 00:29:08.248 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:29:08.248 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:29:08.248 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:29:08.248 | 99.00th=[ 7635], 99.50th=[ 7963], 99.90th=[ 9765], 99.95th=[11600], 00:29:08.248 | 99.99th=[12387] 00:29:08.248 bw ( KiB/s): min=35648, max=35904, per=100.00%, avg=35792.00, stdev=131.94, samples=4 00:29:08.248 iops : min= 8912, max= 8976, avg=8948.00, stdev=32.98, samples=4 00:29:08.248 lat (msec) : 4=0.09%, 10=99.80%, 20=0.11% 00:29:08.248 cpu : usr=69.94%, sys=28.66%, ctx=57, majf=0, minf=31 00:29:08.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:08.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:08.248 issued rwts: total=17926,17956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:08.248 00:29:08.248 Run status group 0 (all jobs): 00:29:08.248 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2007-2007msec 00:29:08.248 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2007-2007msec 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 
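What the preceding trace exercises: host/fio.sh first builds a complete NVMe-oF/TCP target over JSON-RPC and then drives it with stock fio through the SPDK fio plugin, which fio_plugin() loads via LD_PRELOAD after probing the plugin binary for a linked ASAN runtime with ldd/grep. Condensed into plain shell (same paths, names, and arguments as in this log; the rpc shorthand is added here only for readability), the sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, with the options the harness passes
    $rpc bdev_malloc_create 64 512 -b Malloc1                        # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # expose Malloc1 as namespace 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The 4 KiB example_config.fio job that just completed and the 16 KiB mock_sgl_config.fio job that follows both run against this same 10.0.0.2:4420 listener.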
00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.248 15:24:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.248 15:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.248 15:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:08.248 15:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.248 15:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.248 15:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.248 15:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:08.248 15:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:08.506 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:08.506 fio-3.35 00:29:08.506 Starting 1 thread 00:29:11.036 00:29:11.036 test: (groupid=0, jobs=1): err= 0: pid=3266420: Mon Oct 28 15:24:57 2024 00:29:11.036 read: IOPS=8210, BW=128MiB/s (135MB/s)(257MiB/2006msec) 00:29:11.036 slat (usec): min=2, max=131, avg= 4.32, stdev= 2.50 00:29:11.036 clat (usec): min=1996, max=17608, avg=8990.02, stdev=2125.09 00:29:11.036 lat (usec): min=2000, max=17613, avg=8994.34, stdev=2125.13 00:29:11.036 clat percentiles (usec): 00:29:11.036 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7177], 00:29:11.036 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:29:11.036 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12649], 00:29:11.036 | 99.00th=[15401], 99.50th=[16319], 99.90th=[16909], 99.95th=[17171], 00:29:11.036 | 99.99th=[17433] 00:29:11.036 bw ( KiB/s): min=60288, max=73920, per=51.26%, avg=67344.00, stdev=6234.14, samples=4 00:29:11.036 iops : min= 3768, max= 4620, avg=4209.00, stdev=389.63, samples=4 00:29:11.036 write: IOPS=4775, BW=74.6MiB/s (78.2MB/s)(137MiB/1842msec); 0 zone resets 00:29:11.036 slat (usec): min=30, 
max=194, avg=38.67, stdev= 6.56 00:29:11.036 clat (usec): min=3928, max=18278, avg=11621.08, stdev=1878.58 00:29:11.036 lat (usec): min=3978, max=18313, avg=11659.75, stdev=1878.64 00:29:11.036 clat percentiles (usec): 00:29:11.036 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:29:11.036 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:29:11.036 | 70.00th=[12649], 80.00th=[13173], 90.00th=[14091], 95.00th=[14877], 00:29:11.036 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17695], 99.95th=[17695], 00:29:11.036 | 99.99th=[18220] 00:29:11.036 bw ( KiB/s): min=63040, max=77536, per=91.66%, avg=70040.00, stdev=6816.49, samples=4 00:29:11.036 iops : min= 3940, max= 4846, avg=4377.50, stdev=426.03, samples=4 00:29:11.036 lat (msec) : 2=0.01%, 4=0.21%, 10=51.50%, 20=48.29% 00:29:11.036 cpu : usr=82.00%, sys=17.11%, ctx=28, majf=0, minf=60 00:29:11.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:11.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:11.036 issued rwts: total=16471,8797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.036 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:11.036 00:29:11.036 Run status group 0 (all jobs): 00:29:11.036 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=257MiB (270MB), run=2006-2006msec 00:29:11.036 WRITE: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=137MiB (144MB), run=1842-1842msec 00:29:11.036 15:24:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.294 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.294 rmmod nvme_tcp 00:29:11.294 rmmod nvme_fabrics 00:29:11.294 rmmod nvme_keyring 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3265357 ']' 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3265357 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3265357 ']' 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # 
kill -0 3265357 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3265357 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3265357' 00:29:11.554 killing process with pid 3265357 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3265357 00:29:11.554 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3265357 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.814 15:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.357 00:29:14.357 real 0m16.101s 00:29:14.357 user 0m48.298s 00:29:14.357 sys 0m5.093s 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.357 ************************************ 00:29:14.357 END TEST nvmf_fio_host 00:29:14.357 ************************************ 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.357 ************************************ 00:29:14.357 START TEST nvmf_failover 00:29:14.357 ************************************ 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:14.357 * Looking for test storage... 00:29:14.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lcov --version 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:14.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.357 --rc genhtml_branch_coverage=1 00:29:14.357 --rc genhtml_function_coverage=1 00:29:14.357 --rc genhtml_legend=1 00:29:14.357 --rc geninfo_all_blocks=1 00:29:14.357 --rc geninfo_unexecuted_blocks=1 00:29:14.357 00:29:14.357 ' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:14.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.357 --rc genhtml_branch_coverage=1 00:29:14.357 --rc genhtml_function_coverage=1 00:29:14.357 --rc genhtml_legend=1 00:29:14.357 --rc geninfo_all_blocks=1 00:29:14.357 --rc geninfo_unexecuted_blocks=1 00:29:14.357 00:29:14.357 ' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:14.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.357 --rc genhtml_branch_coverage=1 00:29:14.357 --rc genhtml_function_coverage=1 00:29:14.357 --rc genhtml_legend=1 00:29:14.357 --rc geninfo_all_blocks=1 00:29:14.357 --rc geninfo_unexecuted_blocks=1 00:29:14.357 00:29:14.357 ' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:14.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.357 --rc genhtml_branch_coverage=1 00:29:14.357 --rc genhtml_function_coverage=1 00:29:14.357 --rc genhtml_legend=1 00:29:14.357 --rc geninfo_all_blocks=1 00:29:14.357 --rc geninfo_unexecuted_blocks=1 00:29:14.357 00:29:14.357 ' 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.357 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
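One line in the trace above deserves a remark: the '[' '' -eq 1 ']' step reported as "test/nvmf/common.sh: line 33: [: : integer expression expected" is the usual bash pitfall of comparing a variable numerically while it is empty in this environment. It is harmless here (the test returns false and the script continues), but the behaviour is easy to reproduce and to guard against; the variable name below is a stand-in, since the real one at common.sh line 33 is not visible in this log:

    flag=""                                   # empty, as in this run
    if [ "$flag" -eq 1 ]; then :; fi          # prints: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then :; fi     # defaulting the empty value keeps the check quiet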
00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.358 15:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:17.650 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:17.650 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:17.650 Found net devices under 0000:84:00.0: cvl_0_0 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:17.650 Found net devices under 0000:84:00.1: cvl_0_1 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
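With cvl_0_0 picked as the target port and cvl_0_1 as the initiator port, nvmf_tcp_init next splits them across network namespaces, as the commands that follow show: the target E810 port is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, and a ping in each direction checks the path (with NET_TYPE=phy the two ports are presumably cabled back-to-back on this rig). Stripped of the xtrace prefixes, the sequence amounts to:

    ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # firewall exception for the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace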
00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.650 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:29:17.651 00:29:17.651 --- 10.0.0.2 ping statistics --- 00:29:17.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.651 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:29:17.651 00:29:17.651 --- 10.0.0.1 ping statistics --- 00:29:17.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.651 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.651 15:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3268767 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3268767 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3268767 ']' 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.651 [2024-10-28 15:25:04.080228] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:29:17.651 [2024-10-28 15:25:04.080330] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.651 [2024-10-28 15:25:04.188772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:17.651 [2024-10-28 15:25:04.305464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:17.651 [2024-10-28 15:25:04.305581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.651 [2024-10-28 15:25:04.305617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.651 [2024-10-28 15:25:04.305648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.651 [2024-10-28 15:25:04.305695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.651 [2024-10-28 15:25:04.308885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.651 [2024-10-28 15:25:04.309004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.651 [2024-10-28 15:25:04.308999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.651 15:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:18.216 [2024-10-28 15:25:05.020724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.216 15:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:19.150 Malloc0 00:29:19.150 15:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.408 15:25:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.975 15:25:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.539 [2024-10-28 15:25:07.232469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.539 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:20.796 [2024-10-28 15:25:07.525493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:20.796 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:21.054 [2024-10-28 15:25:07.854335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3269195 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3269195 /var/tmp/bdevperf.sock 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3269195 ']' 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:21.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:21.054 15:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:21.620 15:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.620 15:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:21.620 15:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:22.186 NVMe0n1 00:29:22.186 15:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:22.752 00:29:22.752 15:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3269340 00:29:22.752 15:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:22.752 15:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:23.684 15:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.251 [2024-10-28 15:25:10.945664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257ad0 is same with the state(6) to be set 00:29:24.251 [2024-10-28 15:25:10.945760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257ad0 is same with the state(6) to be set 00:29:24.251 [2024-10-28 15:25:10.945793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257ad0 is same with the state(6) to be set 00:29:24.251 
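The burst of tcp.c:1773 *ERROR* lines here accompanies the listener removal: the target logs it while tearing down the queue pairs on the path that was just pulled, which is precisely what the failover test provokes. bdevperf was started in wait-for-RPC mode (-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f), NVMe0 was attached twice with -x failover so bdev_nvme keeps 4421 as an alternate path, and perform_tests was kicked off before the 4420 listener was removed. In outline (paths and arguments as in this log; the comments are interpretation):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # primary path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
         -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # alternate path
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
         -s /var/tmp/bdevperf.sock perform_tests &                                      # verify workload running
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # I/O is expected to continue on the 4421 path; the same step is repeated next with a
    # third path on 4422 before the 4421 listener is removed in turn.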
[2024-10-28 15:25:10.945806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257ad0 is same with the state(6) to be set
00:29:24.251 (previous message repeated with successive timestamps through 15:25:10.945867)
00:29:24.251 15:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:29:27.533 15:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:27.791 
00:29:27.791 15:25:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:28.050 [2024-10-28 15:25:14.857265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258580 is same with the state(6) to be set
00:29:28.050 (previous message repeated with successive timestamps through 15:25:14.858457)
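Taken together, the host/failover.sh steps around this point are a listener shuffle driven through SPDK's rpc.py: attach a second, failover-only path on port 4422 to the NVMe0 controller inside bdevperf, then remove the listener the host is currently on so the initiator is forced to fail over, and finally restore the original port. A minimal sketch of the same sequence, assuming a target and a bdevperf instance are already running as set up earlier in this job, and that the bdevperf PID is held in BDEVPERF_PID (a placeholder name, not taken from this log):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# add a second (failover) path to the existing NVMe0 controller via bdevperf's RPC socket
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# drop the listener the host is currently using, forcing a failover to the surviving path
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
# bring the original port back, retire the temporary one, then let bdevperf run to completion
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait "$BDEVPERF_PID"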
00:29:28.051 15:25:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:29:31.333 15:25:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:31.591 [2024-10-28 15:25:18.231116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:31.591 15:25:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:29:32.522 15:25:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:33.099 [2024-10-28 15:25:19.724398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111d960 is same with the state(6) to be set
00:29:33.099 15:25:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3269340
00:29:38.363 {
00:29:38.363   "results": [
00:29:38.363     {
00:29:38.363       "job": "NVMe0n1",
00:29:38.363       "core_mask": "0x1",
00:29:38.363       "workload": "verify",
00:29:38.363       "status": "finished",
00:29:38.363       "verify_range": {
00:29:38.363         "start": 0,
00:29:38.363         "length": 16384
00:29:38.363       },
00:29:38.363       "queue_depth": 128,
00:29:38.363       "io_size": 4096,
00:29:38.363       "runtime": 15.009291,
00:29:38.363       "iops": 8690.883533406075,
00:29:38.363       "mibps": 33.94876380236748,
00:29:38.363       "io_failed": 8861,
00:29:38.363       "io_timeout": 0,
00:29:38.363       "avg_latency_us": 13765.295839435717,
00:29:38.363       "min_latency_us": 558.2696296296297,
00:29:38.363       "max_latency_us": 15631.54962962963
00:29:38.363     }
00:29:38.363   ],
00:29:38.363   "core_count": 1
00:29:38.363 }
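The JSON block just above is the per-job summary reported when the bdevperf run ends. If that output is captured to a file, the headline figures can be pulled out with jq; the file name below is illustrative, not from this log:

# assuming the results block was saved as results.json
jq '.results[0] | {iops, mibps, io_failed, avg_latency_us}' results.json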
00:29:38.363 "runtime": 15.009291, 00:29:38.363 "iops": 8690.883533406075, 00:29:38.363 "mibps": 33.94876380236748, 00:29:38.363 "io_failed": 8861, 00:29:38.363 "io_timeout": 0, 00:29:38.363 "avg_latency_us": 13765.295839435717, 00:29:38.363 "min_latency_us": 558.2696296296297, 00:29:38.363 "max_latency_us": 15631.54962962963 00:29:38.363 } 00:29:38.363 ], 00:29:38.363 "core_count": 1 00:29:38.363 } 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3269195 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3269195 ']' 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3269195 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3269195 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3269195' 00:29:38.363 killing process with pid 3269195 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3269195 00:29:38.363 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3269195 00:29:38.364 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:38.364 [2024-10-28 15:25:07.927354] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:29:38.364 [2024-10-28 15:25:07.927456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269195 ] 00:29:38.364 [2024-10-28 15:25:08.002247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.364 [2024-10-28 15:25:08.061320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.364 Running I/O for 15 seconds... 
00:29:38.364 8646.00 IOPS, 33.77 MiB/s [2024-10-28T14:25:25.231Z]
00:29:38.364 [2024-10-28 15:25:10.946264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:38.364 [2024-10-28 15:25:10.946304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.364 (analogous print_command/print_completion pairs follow for the remaining queued WRITEs, lba 90648-90832, and READs, lba 89816-90624, each ABORTED - SQ DELETION, through 15:25:10.950254)
00:29:38.367 [2024-10-28 15:25:10.950269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bd1f0 is same with the state(6) to be set
00:29:38.367 [2024-10-28 15:25:10.950285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:38.367 [2024-10-28 15:25:10.950297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:38.367 [2024-10-28 15:25:10.950309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90632 len:8 PRP1 0x0 PRP2 0x0
00:29:38.367 [2024-10-28 15:25:10.950321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.367 [2024-10-28 15:25:10.950393] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:38.367 [2024-10-28 15:25:10.950436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:38.367 [2024-10-28 15:25:10.950454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.367 (the same ASYNC EVENT REQUEST abort is then reported for qid:0 cid:1, cid:2 and cid:3)
00:29:38.367 [2024-10-28 15:25:10.950556] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:38.367 [2024-10-28 15:25:10.950620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2398c00 (9): Bad file descriptor
00:29:38.367 [2024-10-28 15:25:10.953892] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:38.367 [2024-10-28 15:25:10.987223] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:29:38.367 8486.00 IOPS, 33.15 MiB/s [2024-10-28T14:25:25.234Z]
00:29:38.367 8654.33 IOPS, 33.81 MiB/s [2024-10-28T14:25:25.234Z]
00:29:38.367 8718.00 IOPS, 34.05 MiB/s [2024-10-28T14:25:25.234Z]
00:29:38.367 8756.80 IOPS, 34.21 MiB/s [2024-10-28T14:25:25.234Z]
00:29:38.367 [2024-10-28 15:25:14.860359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.367 [2024-10-28 15:25:14.860405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.367 (analogous print_command/print_completion pairs follow for the queued READs lba 113560-113664, each ABORTED - SQ DELETION, through 15:25:14.860874)
00:29:38.367 [2024-10-28 15:25:14.860890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113672 len:8 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:29:38.367 [2024-10-28 15:25:14.860904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.860919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.367 [2024-10-28 15:25:14.860933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.860949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.860963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.860978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.860991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.861021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.861050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.861079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.861109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.861139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.861169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 
[2024-10-28 15:25:14.861199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.367 [2024-10-28 15:25:14.861232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.367 [2024-10-28 15:25:14.861248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.861842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.368 [2024-10-28 15:25:14.861871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.368 [2024-10-28 15:25:14.861900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.368 [2024-10-28 15:25:14.861930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.368 [2024-10-28 15:25:14.861960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.861976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.368 [2024-10-28 15:25:14.861993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.368 [2024-10-28 15:25:14.862022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.368 [2024-10-28 15:25:14.862321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.368 [2024-10-28 15:25:14.862336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:38.369 [2024-10-28 15:25:14.862428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862733] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.862972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.369 [2024-10-28 15:25:14.862985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114232 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114240 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114248 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114256 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114264 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114272 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114280 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 
[2024-10-28 15:25:14.863358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114288 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114296 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114304 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114312 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.369 [2024-10-28 15:25:14.863561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.369 [2024-10-28 15:25:14.863572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114320 len:8 PRP1 0x0 PRP2 0x0 00:29:38.369 [2024-10-28 15:25:14.863585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.369 [2024-10-28 15:25:14.863597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114328 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863644] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114336 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114344 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114352 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114360 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114368 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114376 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.863962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114384 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.863974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.863987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.863998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114392 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114400 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114408 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114416 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114424 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 
15:25:14.864233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114432 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114440 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114448 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114456 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114464 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114472 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864520] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114480 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114488 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114496 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114504 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114512 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.370 [2024-10-28 15:25:14.864767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.370 [2024-10-28 15:25:14.864778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.370 [2024-10-28 15:25:14.864789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114520 len:8 PRP1 0x0 PRP2 0x0 00:29:38.370 [2024-10-28 15:25:14.864801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.864814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.371 [2024-10-28 15:25:14.864825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:38.371 [2024-10-28 15:25:14.864836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114528 len:8 PRP1 0x0 PRP2 0x0 00:29:38.371 [2024-10-28 15:25:14.864848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.864861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.371 [2024-10-28 15:25:14.864872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.371 [2024-10-28 15:25:14.864883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114536 len:8 PRP1 0x0 PRP2 0x0 00:29:38.371 [2024-10-28 15:25:14.864895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.864908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.371 [2024-10-28 15:25:14.864918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.371 [2024-10-28 15:25:14.864930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114544 len:8 PRP1 0x0 PRP2 0x0 00:29:38.371 [2024-10-28 15:25:14.864942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.864955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.371 [2024-10-28 15:25:14.864965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.371 [2024-10-28 15:25:14.864976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114552 len:8 PRP1 0x0 PRP2 0x0 00:29:38.371 [2024-10-28 15:25:14.864989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.865002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.371 [2024-10-28 15:25:14.865019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.371 [2024-10-28 15:25:14.865036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114560 len:8 PRP1 0x0 PRP2 0x0 00:29:38.371 [2024-10-28 15:25:14.865050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.865063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.371 [2024-10-28 15:25:14.865074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.371 [2024-10-28 15:25:14.865085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114568 len:8 PRP1 0x0 PRP2 0x0 00:29:38.371 [2024-10-28 15:25:14.865098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.865166] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:38.371 [2024-10-28 15:25:14.865208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.371 [2024-10-28 15:25:14.865226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.865241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.371 [2024-10-28 15:25:14.865255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.865269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.371 [2024-10-28 15:25:14.865283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.865297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.371 [2024-10-28 15:25:14.865310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:14.865323] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:29:38.371 [2024-10-28 15:25:14.865385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2398c00 (9): Bad file descriptor 00:29:38.371 [2024-10-28 15:25:14.868591] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:29:38.371 [2024-10-28 15:25:14.903311] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:29:38.371 8711.67 IOPS, 34.03 MiB/s [2024-10-28T14:25:25.238Z] 8735.43 IOPS, 34.12 MiB/s [2024-10-28T14:25:25.238Z] 8755.50 IOPS, 34.20 MiB/s [2024-10-28T14:25:25.238Z] 8766.89 IOPS, 34.25 MiB/s [2024-10-28T14:25:25.238Z] 8780.60 IOPS, 34.30 MiB/s [2024-10-28T14:25:25.238Z] [2024-10-28 15:25:19.725566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.725964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.725977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.371 [2024-10-28 15:25:19.726320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.371 [2024-10-28 15:25:19.726333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.372 [2024-10-28 15:25:19.726363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.372 [2024-10-28 15:25:19.726392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.372 [2024-10-28 15:25:19.726420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.372 [2024-10-28 15:25:19.726448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 
15:25:19.726535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.726980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.726995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.372 [2024-10-28 15:25:19.727381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.372 [2024-10-28 15:25:19.727394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.727975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.727989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72272 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.373 [2024-10-28 15:25:19.728609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.373 [2024-10-28 15:25:19.728624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.374 [2024-10-28 15:25:19.728638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:38.374 [2024-10-28 15:25:19.728678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.374 [2024-10-28 15:25:19.728708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.374 [2024-10-28 15:25:19.728737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.374 [2024-10-28 15:25:19.728766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.374 [2024-10-28 15:25:19.728799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.374 [2024-10-28 15:25:19.728828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.374 [2024-10-28 15:25:19.728856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.728912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72408 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.728925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.728957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.728968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72416 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.728995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:29:38.374 [2024-10-28 15:25:19.729017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72424 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72432 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72440 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72448 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72456 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72464 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 
15:25:19.729306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72472 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72480 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72488 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72496 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72504 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72512 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72520 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71720 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71728 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71736 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.374 [2024-10-28 15:25:19.729798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71744 len:8 PRP1 0x0 PRP2 0x0 00:29:38.374 [2024-10-28 15:25:19.729810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.374 [2024-10-28 15:25:19.729823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.374 [2024-10-28 15:25:19.729834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.375 [2024-10-28 15:25:19.729845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71752 len:8 PRP1 0x0 PRP2 0x0 00:29:38.375 [2024-10-28 15:25:19.729857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.375 [2024-10-28 15:25:19.729870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:38.375 [2024-10-28 15:25:19.729881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:38.375 [2024-10-28 15:25:19.729892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:71760 len:8 PRP1 0x0 PRP2 0x0 00:29:38.375 [2024-10-28 15:25:19.729904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.375 [2024-10-28 15:25:19.729967] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:38.375 [2024-10-28 15:25:19.730005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.375 [2024-10-28 15:25:19.730027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.375 [2024-10-28 15:25:19.730043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.375 [2024-10-28 15:25:19.730065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.375 [2024-10-28 15:25:19.730086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.375 [2024-10-28 15:25:19.730099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.375 [2024-10-28 15:25:19.730113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.375 [2024-10-28 15:25:19.730126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.375 [2024-10-28 15:25:19.730140] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:38.375 [2024-10-28 15:25:19.730177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2398c00 (9): Bad file descriptor 00:29:38.375 [2024-10-28 15:25:19.733405] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:38.375 [2024-10-28 15:25:19.890072] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
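The blocks of "ABORTED - SQ DELETION (00/08)" notices above are bdevperf's queued READ/WRITE commands being completed manually each time a TCP path is dropped; each "Start failover from ... to ..." followed by "Resetting controller successful." marks one completed path switch. A minimal post-processing sketch (not part of failover.sh; log path taken from this run, the one cat'd later) for summarizing those events from the captured bdevperf output — the first grep uses the same string the harness counts just below:

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  # completed controller resets (the string failover.sh greps for below)
  grep -c 'Resetting controller successful' "$log"
  # which path-to-path failovers happened, and how often
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' "$log" | sort | uniq -c
  # queued I/Os aborted with SQ DELETION while the paths were torn down
  grep -c 'ABORTED - SQ DELETION' "$log"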
00:29:38.375 8643.64 IOPS, 33.76 MiB/s [2024-10-28T14:25:25.242Z] 8653.83 IOPS, 33.80 MiB/s [2024-10-28T14:25:25.242Z] 8674.38 IOPS, 33.88 MiB/s [2024-10-28T14:25:25.242Z] 8683.21 IOPS, 33.92 MiB/s
00:29:38.375 Latency(us)
00:29:38.375 [2024-10-28T14:25:25.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.375 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:38.375 Verification LBA range: start 0x0 length 0x4000
00:29:38.375 NVMe0n1 : 15.01 8690.88 33.95 590.37 0.00 13765.30 558.27 15631.55
00:29:38.375 [2024-10-28T14:25:25.242Z] ===================================================================================================================
00:29:38.375 [2024-10-28T14:25:25.242Z] Total : 8690.88 33.95 590.37 0.00 13765.30 558.27 15631.55
00:29:38.375 Received shutdown signal, test time was about 15.000000 seconds
00:29:38.375
00:29:38.375 Latency(us)
00:29:38.375 [2024-10-28T14:25:25.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.375 [2024-10-28T14:25:25.242Z] ===================================================================================================================
00:29:38.375 [2024-10-28T14:25:25.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3271185
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3271185 /var/tmp/bdevperf.sock
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3271185 ']'
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:38.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
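At this point the test launches a second bdevperf idle (-z) with a private RPC socket (-r /var/tmp/bdevperf.sock) and drives every subsequent configuration step and the workload itself over that socket. A condensed sketch of that flow, assuming the target already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420-4422; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, and the alternate paths on 4421/4422 are attached exactly like the 4420 one shown:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  # start bdevperf idle (-z) with its own RPC socket and a 4 KiB verify job
  "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # simplified waitforlisten: poll until the UNIX-domain socket shows up
  while [ ! -S "$SOCK" ]; do sleep 0.1; done
  # attach the first path with explicit failover handling (-x failover)
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # run the configured job; the results come back as the JSON block below
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  kill "$bdevperf_pid"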
00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:38.375 15:25:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:38.633 15:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.633 15:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:38.633 15:25:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:39.198 [2024-10-28 15:25:26.007313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:39.198 15:25:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:39.764 [2024-10-28 15:25:26.328226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:39.764 15:25:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:40.021 NVMe0n1 00:29:40.021 15:25:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:40.586 00:29:40.587 15:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:40.845 00:29:40.845 15:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:40.845 15:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:41.103 15:25:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:41.667 15:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:45.034 15:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:45.034 15:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:45.034 15:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3271914 00:29:45.035 15:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:45.035 15:25:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3271914 00:29:45.969 { 00:29:45.969 "results": [ 00:29:45.969 { 00:29:45.969 "job": "NVMe0n1", 00:29:45.969 "core_mask": "0x1", 
00:29:45.969 "workload": "verify", 00:29:45.969 "status": "finished", 00:29:45.969 "verify_range": { 00:29:45.969 "start": 0, 00:29:45.969 "length": 16384 00:29:45.969 }, 00:29:45.969 "queue_depth": 128, 00:29:45.969 "io_size": 4096, 00:29:45.969 "runtime": 1.013125, 00:29:45.969 "iops": 8659.34608266502, 00:29:45.969 "mibps": 33.82557063541024, 00:29:45.969 "io_failed": 0, 00:29:45.969 "io_timeout": 0, 00:29:45.969 "avg_latency_us": 14720.90409649134, 00:29:45.970 "min_latency_us": 3228.254814814815, 00:29:45.970 "max_latency_us": 15049.007407407407 00:29:45.970 } 00:29:45.970 ], 00:29:45.970 "core_count": 1 00:29:45.970 } 00:29:45.970 15:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:45.970 [2024-10-28 15:25:25.011455] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:29:45.970 [2024-10-28 15:25:25.011559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271185 ] 00:29:45.970 [2024-10-28 15:25:25.086766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.970 [2024-10-28 15:25:25.144457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.970 [2024-10-28 15:25:28.226720] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:45.970 [2024-10-28 15:25:28.226798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.970 [2024-10-28 15:25:28.226823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.970 [2024-10-28 15:25:28.226852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.970 [2024-10-28 15:25:28.226879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.970 [2024-10-28 15:25:28.226902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.970 [2024-10-28 15:25:28.226916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.970 [2024-10-28 15:25:28.226930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.970 [2024-10-28 15:25:28.226944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.970 [2024-10-28 15:25:28.226958] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:29:45.970 [2024-10-28 15:25:28.227007] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:29:45.970 [2024-10-28 15:25:28.227042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1ac00 (9): Bad file descriptor 00:29:45.970 [2024-10-28 15:25:28.238200] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:29:45.970 Running I/O for 1 seconds... 00:29:45.970 8645.00 IOPS, 33.77 MiB/s 00:29:45.970 Latency(us) 00:29:45.970 [2024-10-28T14:25:32.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.970 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:45.970 Verification LBA range: start 0x0 length 0x4000 00:29:45.970 NVMe0n1 : 1.01 8659.35 33.83 0.00 0.00 14720.90 3228.25 15049.01 00:29:45.970 [2024-10-28T14:25:32.837Z] =================================================================================================================== 00:29:45.970 [2024-10-28T14:25:32.837Z] Total : 8659.35 33.83 0.00 0.00 14720.90 3228.25 15049.01 00:29:45.970 15:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:45.970 15:25:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:46.535 15:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:47.102 15:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:47.102 15:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:47.668 15:25:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:47.926 15:25:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:51.208 15:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:51.208 15:25:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3271185 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3271185 ']' 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3271185 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3271185 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3271185' 00:29:51.466 killing process with pid 3271185 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3271185 00:29:51.466 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3271185 00:29:51.724 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:51.724 15:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.289 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.289 rmmod nvme_tcp 00:29:52.289 rmmod nvme_fabrics 00:29:52.546 rmmod nvme_keyring 00:29:52.546 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:52.546 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:29:52.546 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:29:52.546 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3268767 ']' 00:29:52.546 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3268767 00:29:52.546 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3268767 ']' 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3268767 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3268767 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3268767' 00:29:52.547 killing process with pid 3268767 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3268767 00:29:52.547 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3268767 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.805 15:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.348 00:29:55.348 real 0m40.943s 00:29:55.348 user 2m25.066s 00:29:55.348 sys 0m7.684s 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:55.348 ************************************ 00:29:55.348 END TEST nvmf_failover 00:29:55.348 ************************************ 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.348 ************************************ 00:29:55.348 START TEST nvmf_host_discovery 00:29:55.348 ************************************ 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:55.348 * Looking for test storage... 
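The nvmf_host_discovery suite that starts here first re-initializes the TCP test network via nvmftestinit. Condensed from the xtrace further down in this log (interface names cvl_0_0/cvl_0_1 are the e810 ports detected on this host; other hosts will differ), the plumbing amounts to roughly:

    # Move the first detected port into a private namespace and address each side:
    # 10.0.0.2 is the target (inside the namespace), 10.0.0.1 the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator interface and sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp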
00:29:55.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:29:55.348 15:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:55.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.348 --rc genhtml_branch_coverage=1 00:29:55.348 --rc genhtml_function_coverage=1 00:29:55.348 --rc genhtml_legend=1 00:29:55.348 --rc geninfo_all_blocks=1 00:29:55.348 --rc geninfo_unexecuted_blocks=1 00:29:55.348 00:29:55.348 ' 00:29:55.348 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:55.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.348 --rc genhtml_branch_coverage=1 00:29:55.348 --rc genhtml_function_coverage=1 00:29:55.348 --rc genhtml_legend=1 00:29:55.348 --rc geninfo_all_blocks=1 00:29:55.348 --rc geninfo_unexecuted_blocks=1 00:29:55.349 00:29:55.349 ' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:55.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.349 --rc genhtml_branch_coverage=1 00:29:55.349 --rc genhtml_function_coverage=1 00:29:55.349 --rc genhtml_legend=1 00:29:55.349 --rc geninfo_all_blocks=1 00:29:55.349 --rc geninfo_unexecuted_blocks=1 00:29:55.349 00:29:55.349 ' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:55.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.349 --rc genhtml_branch_coverage=1 00:29:55.349 --rc genhtml_function_coverage=1 00:29:55.349 --rc genhtml_legend=1 00:29:55.349 --rc geninfo_all_blocks=1 00:29:55.349 --rc geninfo_unexecuted_blocks=1 00:29:55.349 00:29:55.349 ' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:55.349 15:25:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:55.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.349 15:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:58.650 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:58.650 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.650 15:25:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:58.650 Found net devices under 0000:84:00.0: cvl_0_0 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:58.650 Found net devices under 0000:84:00.1: cvl_0_1 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.650 
15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:29:58.650 00:29:58.650 --- 10.0.0.2 ping statistics --- 00:29:58.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.650 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:58.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:29:58.650 00:29:58.650 --- 10.0.0.1 ping statistics --- 00:29:58.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.650 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.650 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.651 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.651 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.651 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.651 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.651 15:25:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3274860 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3274860 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3274860 ']' 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:58.651 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.651 [2024-10-28 15:25:45.114044] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
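Condensed from the bring-up recorded around this point: the target application is started inside the namespace, the TCP transport is created, a discovery listener is opened on port 8009, and two null bdevs are prepared to back the subsystems the test will advertise. A minimal sketch with paths abbreviated (the real run uses the full workspace paths shown in the xtrace):

    # Start the target inside the namespace on core mask 0x2 and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"

    # TCP transport, discovery listener, and two 1000 MiB / 512 B-block null bdevs.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine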
00:29:58.651 [2024-10-28 15:25:45.114212] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.651 [2024-10-28 15:25:45.289468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.651 [2024-10-28 15:25:45.407791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.651 [2024-10-28 15:25:45.407913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.651 [2024-10-28 15:25:45.407950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.651 [2024-10-28 15:25:45.407988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.651 [2024-10-28 15:25:45.408003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.651 [2024-10-28 15:25:45.409157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.909 [2024-10-28 15:25:45.716214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.909 [2024-10-28 15:25:45.728520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.909 null0 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.909 null1 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.909 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3275008 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3275008 /tmp/host.sock 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3275008 ']' 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:58.910 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:58.910 15:25:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.168 [2024-10-28 15:25:45.868647] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
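On the host side a second SPDK application is used purely as an NVMe initiator, driven over /tmp/host.sock; it runs the bdev_nvme discovery client against the target's 8009 listener. The discovery start and the polling helpers exercised by the rest of this test reduce to roughly the following sketch, condensed from the xtrace below (helper names match host/discovery.sh; paths abbreviated):

    # Second app acts as the host and exposes its own RPC socket.
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock

    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Helpers the test polls: attached controller names, bdev names, pending notifications.
    get_subsystem_names() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'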
00:29:59.168 [2024-10-28 15:25:45.868834] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275008 ] 00:29:59.168 [2024-10-28 15:25:45.976884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.429 [2024-10-28 15:25:46.045114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.429 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:59.429 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:59.430 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.690 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 [2024-10-28 15:25:46.684304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:59.950 15:25:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:30:00.212 15:25:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:00.473 [2024-10-28 15:25:47.246440] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:00.473 [2024-10-28 15:25:47.246499] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:00.473 [2024-10-28 15:25:47.246554] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:00.473 [2024-10-28 15:25:47.333963] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:00.734 [2024-10-28 15:25:47.557088] 
bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:00.734 [2024-10-28 15:25:47.559142] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12b15b0:1 started. 00:30:00.734 [2024-10-28 15:25:47.563296] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:00.734 [2024-10-28 15:25:47.563349] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:00.734 [2024-10-28 15:25:47.565946] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12b15b0 was disconnected and freed. delete nvme_qpair. 00:30:01.304 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.304 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:01.304 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:01.305 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:01.305 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.305 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.305 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:01.305 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:01.305 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:01.305 15:25:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.305 15:25:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.305 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:01.566 [2024-10-28 15:25:48.242926] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12b17c0:1 started. 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:01.566 [2024-10-28 15:25:48.248167] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12b17c0 was disconnected and freed. delete nvme_qpair. 
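For reference, the target-side sequence the trace has exercised up to this point (discovery.sh@86 through @111: create the subsystem, attach the null bdevs as namespaces, open the 4420 listener, allow the test host NQN, then hot-add a second namespace) can be reproduced by hand with SPDK's scripts/rpc.py, which the suite's rpc_cmd helper effectively wraps. This is a minimal sketch, not the script itself: the NQNs, addresses and the /tmp/host.sock path are the ones shown in the trace, null0/null1 are assumed to have been created earlier in the setup, and the plain retry loop stands in for the script's waitforcondition helper.

  # target side: subsystem with two null-bdev namespaces, TCP listener on 4420, allowed host
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # host side: poll until the discovery service has surfaced both namespaces as bdevs
  for _ in $(seq 10); do
    bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ $bdevs == "nvme0n1 nvme0n2" ]] && break
    sleep 1
  done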
00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.566 [2024-10-28 15:25:48.382425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:01.566 [2024-10-28 15:25:48.382897] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:01.566 [2024-10-28 15:25:48.382938] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:01.566 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:01.828 [2024-10-28 15:25:48.468906] bdev_nvme.c:7215:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.828 [2024-10-28 15:25:48.530063] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:30:01.828 [2024-10-28 15:25:48.530192] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:01.828 [2024-10-28 15:25:48.530232] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:01.828 [2024-10-28 15:25:48.530254] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:01.828 15:25:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.766 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.024 [2024-10-28 15:25:49.666906] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:03.024 [2024-10-28 15:25:49.666951] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:03.024 [2024-10-28 15:25:49.673226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.024 [2024-10-28 15:25:49.673274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.024 [2024-10-28 15:25:49.673292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.024 [2024-10-28 15:25:49.673306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.024 [2024-10-28 15:25:49.673319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.024 [2024-10-28 15:25:49.673332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.024 [2024-10-28 15:25:49.673345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.024 [2024-10-28 15:25:49.673358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.024 [2024-10-28 15:25:49.673371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281c10 is same with the state(6) to be set 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:03.024 [2024-10-28 15:25:49.683231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281c10 (9): Bad file descriptor 00:30:03.024 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.024 [2024-10-28 15:25:49.693270] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:03.024 [2024-10-28 15:25:49.693293] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:03.024 [2024-10-28 15:25:49.693303] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:03.024 [2024-10-28 15:25:49.693316] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.024 [2024-10-28 15:25:49.693378] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:03.024 [2024-10-28 15:25:49.693617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.024 [2024-10-28 15:25:49.693670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281c10 with addr=10.0.0.2, port=4420 00:30:03.024 [2024-10-28 15:25:49.693688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281c10 is same with the state(6) to be set 00:30:03.024 [2024-10-28 15:25:49.693711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281c10 (9): Bad file descriptor 00:30:03.024 [2024-10-28 15:25:49.693733] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.024 [2024-10-28 15:25:49.693747] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.024 [2024-10-28 15:25:49.693764] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:03.024 [2024-10-28 15:25:49.693777] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:03.024 [2024-10-28 15:25:49.693786] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:30:03.024 [2024-10-28 15:25:49.693809] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:03.024 [2024-10-28 15:25:49.703409] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:03.024 [2024-10-28 15:25:49.703429] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:03.024 [2024-10-28 15:25:49.703438] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:03.024 [2024-10-28 15:25:49.703445] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.024 [2024-10-28 15:25:49.703484] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:03.024 [2024-10-28 15:25:49.703671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.024 [2024-10-28 15:25:49.703699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281c10 with addr=10.0.0.2, port=4420 00:30:03.024 [2024-10-28 15:25:49.703715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281c10 is same with the state(6) to be set 00:30:03.024 [2024-10-28 15:25:49.703736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281c10 (9): Bad file descriptor 00:30:03.024 [2024-10-28 15:25:49.703756] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.024 [2024-10-28 15:25:49.703770] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.024 [2024-10-28 15:25:49.703783] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:03.024 [2024-10-28 15:25:49.703795] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:03.024 [2024-10-28 15:25:49.703804] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:03.024 [2024-10-28 15:25:49.703827] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:03.024 [2024-10-28 15:25:49.713517] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:03.024 [2024-10-28 15:25:49.713539] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:03.024 [2024-10-28 15:25:49.713557] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:03.024 [2024-10-28 15:25:49.713565] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.024 [2024-10-28 15:25:49.713603] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:03.024 [2024-10-28 15:25:49.713789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.024 [2024-10-28 15:25:49.713817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281c10 with addr=10.0.0.2, port=4420 00:30:03.024 [2024-10-28 15:25:49.713833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281c10 is same with the state(6) to be set 00:30:03.024 [2024-10-28 15:25:49.713856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281c10 (9): Bad file descriptor 00:30:03.024 [2024-10-28 15:25:49.713876] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.024 [2024-10-28 15:25:49.713890] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.024 [2024-10-28 15:25:49.713903] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:03.024 [2024-10-28 15:25:49.713915] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:03.024 [2024-10-28 15:25:49.713924] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:03.024 [2024-10-28 15:25:49.713939] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:03.024 [2024-10-28 15:25:49.723657] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:03.024 [2024-10-28 15:25:49.723678] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:03.024 [2024-10-28 15:25:49.723687] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:03.024 [2024-10-28 15:25:49.723694] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.024 [2024-10-28 15:25:49.723732] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:03.024 [2024-10-28 15:25:49.723891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.024 [2024-10-28 15:25:49.723918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281c10 with addr=10.0.0.2, port=4420 00:30:03.024 [2024-10-28 15:25:49.723947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281c10 is same with the state(6) to be set 00:30:03.024 [2024-10-28 15:25:49.723968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281c10 (9): Bad file descriptor 00:30:03.024 [2024-10-28 15:25:49.723987] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.025 [2024-10-28 15:25:49.723999] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.025 [2024-10-28 15:25:49.724012] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:03.025 [2024-10-28 15:25:49.724023] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:30:03.025 [2024-10-28 15:25:49.724031] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:03.025 [2024-10-28 15:25:49.724045] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:03.025 [2024-10-28 15:25:49.733766] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:03.025 [2024-10-28 15:25:49.733791] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:03.025 [2024-10-28 15:25:49.733801] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:03.025 [2024-10-28 15:25:49.733809] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.025 [2024-10-28 15:25:49.733848] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:03.025 [2024-10-28 15:25:49.734053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.025 [2024-10-28 15:25:49.734079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281c10 with addr=10.0.0.2, port=4420 00:30:03.025 [2024-10-28 15:25:49.734093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281c10 is same with the state(6) to be set 00:30:03.025 [2024-10-28 15:25:49.734114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281c10 (9): Bad file descriptor 00:30:03.025 [2024-10-28 15:25:49.734146] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.025 [2024-10-28 15:25:49.734162] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.025 [2024-10-28 15:25:49.734175] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:03.025 [2024-10-28 15:25:49.734186] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:03.025 [2024-10-28 15:25:49.734195] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:03.025 [2024-10-28 15:25:49.734209] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
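The burst of connect() failures with errno 111 (connection refused) above is expected at this point: discovery.sh@127 removed the 4420 listener, so the host's reconnect attempts against the old path keep failing until the discovery poller drops that path and only 4421 remains. The check the script performs next (get_subsystem_paths) reduces to one RPC plus a jq filter; a sketch of the same probe, reusing the socket path and controller name from the trace, is:

  # list the transport service IDs of the paths still attached to controller nvme0;
  # once the 4420 listener is gone this should settle on just "4421"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs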
00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:03.025 [2024-10-28 15:25:49.743883] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:03.025 [2024-10-28 15:25:49.743906] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:03.025 [2024-10-28 15:25:49.743915] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:03.025 [2024-10-28 15:25:49.743923] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.025 [2024-10-28 15:25:49.743968] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:03.025 [2024-10-28 15:25:49.744169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.025 [2024-10-28 15:25:49.744194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281c10 with addr=10.0.0.2, port=4420 00:30:03.025 [2024-10-28 15:25:49.744209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281c10 is same with the state(6) to be set 00:30:03.025 [2024-10-28 15:25:49.744230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281c10 (9): Bad file descriptor 00:30:03.025 [2024-10-28 15:25:49.744260] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.025 [2024-10-28 15:25:49.744276] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.025 [2024-10-28 15:25:49.744288] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:30:03.025 [2024-10-28 15:25:49.744299] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:03.025 [2024-10-28 15:25:49.744308] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:03.025 [2024-10-28 15:25:49.744322] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:03.025 [2024-10-28 15:25:49.753405] bdev_nvme.c:7078:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:03.025 [2024-10-28 15:25:49.753435] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:03.025 15:25:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.025 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
xargs 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:03.285 15:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.285 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.286 15:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.666 [2024-10-28 15:25:51.154838] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:04.666 [2024-10-28 15:25:51.154875] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:04.666 [2024-10-28 15:25:51.154903] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:04.666 [2024-10-28 15:25:51.241171] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:04.666 [2024-10-28 15:25:51.340104] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:30:04.666 [2024-10-28 15:25:51.341031] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1295b50:1 started. 00:30:04.666 [2024-10-28 15:25:51.343490] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:04.666 [2024-10-28 15:25:51.343538] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:04.666 [2024-10-28 15:25:51.345263] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1295b50 was disconnected and freed. delete nvme_qpair. 
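The trace above shows the first discovery attach completing (discovery ctrlr attached, log page fetched, subsystem nvme0 created against 10.0.0.2:4421) before the script re-issues the same RPC under its NOT wrapper. For reference, a minimal sketch of that start call issued directly against the host app's RPC socket, using only flags that appear in the trace and assuming the workspace path used by this job:

  # Hedged sketch, not the harness itself: start discovery on the host app
  # (socket /tmp/host.sock) with the same flags traced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # -w blocks until the initial attach completes before the RPC returns
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w

  # Inspect what the discovery service attached
  $rpc -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
  $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs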
00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.666 request: 00:30:04.666 { 00:30:04.666 "name": "nvme", 00:30:04.666 "trtype": "tcp", 00:30:04.666 "traddr": "10.0.0.2", 00:30:04.666 "adrfam": "ipv4", 00:30:04.666 "trsvcid": "8009", 00:30:04.666 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:04.666 "wait_for_attach": true, 00:30:04.666 "method": "bdev_nvme_start_discovery", 00:30:04.666 "req_id": 1 00:30:04.666 } 00:30:04.666 Got JSON-RPC error response 00:30:04.666 response: 00:30:04.666 { 00:30:04.666 "code": -17, 00:30:04.666 "message": "File exists" 00:30:04.666 } 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 
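The -17 (File exists) response above is the expected outcome: a second bdev_nvme_start_discovery aimed at a discovery service that is already running is rejected, and the harness asserts this by wrapping the call in NOT and requiring a non-zero exit. A minimal shell sketch of the same negative check, under the same socket and flag assumptions as the previous sketch:

  # Hedged sketch of the negative test: a duplicate start against 10.0.0.2:8009
  # must fail with JSON-RPC error -17 (File exists); success here is a bug.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  if $rpc -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w 2>/dev/null; then
      echo "unexpected: duplicate discovery start succeeded" >&2
      exit 1
  fi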
00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.666 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.666 request: 00:30:04.667 { 00:30:04.667 "name": "nvme_second", 00:30:04.667 "trtype": "tcp", 00:30:04.667 "traddr": "10.0.0.2", 00:30:04.667 "adrfam": "ipv4", 00:30:04.667 "trsvcid": "8009", 00:30:04.667 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:04.667 "wait_for_attach": true, 00:30:04.667 "method": "bdev_nvme_start_discovery", 00:30:04.667 "req_id": 1 00:30:04.667 } 00:30:04.667 Got JSON-RPC error response 00:30:04.667 response: 00:30:04.667 { 00:30:04.667 "code": -17, 00:30:04.667 "message": "File exists" 00:30:04.667 } 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 
-- # set +x 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:04.667 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.926 15:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.865 [2024-10-28 15:25:52.635382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.865 [2024-10-28 15:25:52.635509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129b600 with addr=10.0.0.2, port=8010 00:30:05.865 [2024-10-28 15:25:52.635591] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:05.865 [2024-10-28 15:25:52.635613] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:05.865 [2024-10-28 15:25:52.635639] 
bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:06.802 [2024-10-28 15:25:53.637792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.802 [2024-10-28 15:25:53.637906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129b600 with addr=10.0.0.2, port=8010 00:30:06.802 [2024-10-28 15:25:53.637982] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:06.802 [2024-10-28 15:25:53.638004] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:06.802 [2024-10-28 15:25:53.638022] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:08.183 [2024-10-28 15:25:54.639803] bdev_nvme.c:7334:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:08.183 request: 00:30:08.183 { 00:30:08.183 "name": "nvme_second", 00:30:08.183 "trtype": "tcp", 00:30:08.183 "traddr": "10.0.0.2", 00:30:08.183 "adrfam": "ipv4", 00:30:08.183 "trsvcid": "8010", 00:30:08.183 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:08.184 "wait_for_attach": false, 00:30:08.184 "attach_timeout_ms": 3000, 00:30:08.184 "method": "bdev_nvme_start_discovery", 00:30:08.184 "req_id": 1 00:30:08.184 } 00:30:08.184 Got JSON-RPC error response 00:30:08.184 response: 00:30:08.184 { 00:30:08.184 "code": -110, 00:30:08.184 "message": "Connection timed out" 00:30:08.184 } 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3275008 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.184 rmmod nvme_tcp 00:30:08.184 rmmod nvme_fabrics 00:30:08.184 rmmod nvme_keyring 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3274860 ']' 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3274860 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3274860 ']' 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3274860 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3274860 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3274860' 00:30:08.184 killing process with pid 3274860 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3274860 00:30:08.184 15:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3274860 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.445 15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.445 
15:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.986 00:30:10.986 real 0m15.568s 00:30:10.986 user 0m22.380s 00:30:10.986 sys 0m4.002s 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.986 ************************************ 00:30:10.986 END TEST nvmf_host_discovery 00:30:10.986 ************************************ 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.986 ************************************ 00:30:10.986 START TEST nvmf_host_multipath_status 00:30:10.986 ************************************ 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:10.986 * Looking for test storage... 00:30:10.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lcov --version 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.986 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:10.987 15:25:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:30:10.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.987 --rc genhtml_branch_coverage=1 00:30:10.987 --rc genhtml_function_coverage=1 00:30:10.987 --rc genhtml_legend=1 00:30:10.987 --rc geninfo_all_blocks=1 00:30:10.987 --rc geninfo_unexecuted_blocks=1 00:30:10.987 00:30:10.987 ' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:30:10.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.987 --rc genhtml_branch_coverage=1 00:30:10.987 --rc genhtml_function_coverage=1 00:30:10.987 --rc genhtml_legend=1 00:30:10.987 --rc geninfo_all_blocks=1 00:30:10.987 --rc geninfo_unexecuted_blocks=1 00:30:10.987 00:30:10.987 ' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:30:10.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.987 --rc genhtml_branch_coverage=1 00:30:10.987 --rc genhtml_function_coverage=1 00:30:10.987 --rc genhtml_legend=1 00:30:10.987 --rc geninfo_all_blocks=1 00:30:10.987 --rc geninfo_unexecuted_blocks=1 00:30:10.987 00:30:10.987 ' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:30:10.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.987 --rc genhtml_branch_coverage=1 00:30:10.987 --rc genhtml_function_coverage=1 00:30:10.987 --rc 
genhtml_legend=1 00:30:10.987 --rc geninfo_all_blocks=1 00:30:10.987 --rc geninfo_unexecuted_blocks=1 00:30:10.987 00:30:10.987 ' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:30:10.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.987 15:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.281 15:26:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.281 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.282 
15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:14.282 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:14.282 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:14.282 Found net devices under 0000:84:00.0: cvl_0_0 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:14.282 Found net devices under 0000:84:00.1: cvl_0_1 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:30:14.282 00:30:14.282 --- 10.0.0.2 ping statistics --- 00:30:14.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.282 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:30:14.282 00:30:14.282 --- 10.0.0.1 ping statistics --- 00:30:14.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.282 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3278196 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3278196 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3278196 ']' 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.282 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.283 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.283 15:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:14.283 [2024-10-28 15:26:00.682727] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:30:14.283 [2024-10-28 15:26:00.682829] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.283 [2024-10-28 15:26:00.819375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.283 [2024-10-28 15:26:00.931327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.283 [2024-10-28 15:26:00.931427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.283 [2024-10-28 15:26:00.931476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.283 [2024-10-28 15:26:00.931493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.283 [2024-10-28 15:26:00.931508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
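At this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace with core mask 0x3, and the harness waits for its RPC socket (/var/tmp/spdk.sock) before configuring the transport and subsystem. A simplified stand-in for that launch-and-wait step, built only from paths and flags visible in the trace; the socket poll below is a rough substitute for the waitforlisten helper, not its actual implementation:

  # Hedged sketch: start the target in the test namespace, then poll for the
  # default RPC socket before issuing any rpc.py configuration calls.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!   # later torn down by killprocess during nvmftestfini

  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break    # default SPDK RPC socket path
      sleep 0.1
  done
  [ -S /var/tmp/spdk.sock ] || { echo "nvmf_tgt did not come up" >&2; exit 1; }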
00:30:14.283 [2024-10-28 15:26:00.933330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.283 [2024-10-28 15:26:00.933340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3278196 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:15.665 [2024-10-28 15:26:02.478190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.665 15:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:16.605 Malloc0 00:30:16.605 15:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:16.865 15:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.435 15:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.004 [2024-10-28 15:26:04.663603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.004 15:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:18.573 [2024-10-28 15:26:05.350063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3278740 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3278740 
/var/tmp/bdevperf.sock 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3278740 ']' 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:18.574 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:19.143 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:19.143 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:19.143 15:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:19.727 15:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:20.037 Nvme0n1 00:30:20.037 15:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:20.652 Nvme0n1 00:30:20.652 15:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:20.652 15:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:22.558 15:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:22.558 15:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:22.816 15:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:23.382 15:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:24.315 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:24.315 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:24.315 15:26:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.315 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:24.880 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.880 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:24.880 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.880 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:25.139 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.139 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:25.139 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.139 15:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:25.398 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.398 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:25.398 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.398 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:25.966 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.966 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:25.966 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.966 15:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:26.225 15:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.225 15:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:26.225 15:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.225 15:26:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:26.794 15:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:26.794 15:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:26.794 15:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:27.364 15:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:27.623 15:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:28.563 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:28.563 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:28.563 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.563 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:29.132 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:29.132 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:29.132 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.132 15:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:29.392 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.392 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:29.392 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.392 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:29.961 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.961 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:29.961 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.961 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:30.221 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.221 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:30.221 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.221 15:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:30.480 15:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.480 15:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:30.480 15:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.480 15:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.050 15:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.050 15:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:31.050 15:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:31.619 15:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:32.189 15:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:33.127 15:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:33.127 15:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:33.127 15:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.127 15:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.385 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.385 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:33.386 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.386 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.645 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:33.645 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.645 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.645 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:34.215 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.215 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:34.215 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.215 15:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.785 15:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.785 15:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.785 15:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.785 15:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:35.355 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.355 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:35.355 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.355 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:35.615 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.615 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:35.615 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:30:36.183 15:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:36.753 15:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:37.692 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:37.692 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:37.692 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:37.692 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.952 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.952 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:37.952 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.952 15:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:38.212 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:38.212 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:38.212 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.212 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:38.781 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.781 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:38.781 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.781 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:39.042 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.042 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:39.042 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:30:39.042 15:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:39.303 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.303 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:39.303 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.303 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:39.870 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.870 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:39.870 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:40.128 15:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:41.064 15:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:42.000 15:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:42.000 15:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:42.000 15:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.000 15:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:42.569 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.569 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:42.569 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.569 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:42.828 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.828 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:42.828 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.828 15:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:43.396 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.396 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:43.396 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.396 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:43.655 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.655 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:43.655 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.655 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:44.223 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:44.223 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:44.223 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.223 15:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:44.482 15:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:44.482 15:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:44.482 15:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:45.050 15:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:45.617 15:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:46.556 15:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:46.556 15:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:46.556 15:26:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.556 15:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:47.125 15:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:47.125 15:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:47.125 15:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.125 15:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:47.386 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.386 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:47.386 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.386 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:47.954 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.954 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:47.954 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.954 15:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:48.215 15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.215 15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:48.215 15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.215 15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:48.784 15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:48.784 15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:48.784 15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.784 
15:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:49.353 15:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.353 15:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:49.612 15:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:49.612 15:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:50.182 15:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:50.442 15:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:51.447 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:51.447 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:51.447 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.447 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:52.016 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.016 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:52.016 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.016 15:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:52.585 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.585 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:52.585 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.585 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:52.845 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.845 15:26:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:52.845 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.845 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:53.105 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.105 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:53.105 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.105 15:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.673 15:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.673 15:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:53.673 15:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.673 15:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:53.933 15:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.933 15:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:53.933 15:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:54.502 15:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:55.071 15:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:56.010 15:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:56.010 15:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:56.010 15:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.010 15:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:56.269 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:56.269 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:56.269 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.269 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:56.838 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.838 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:56.839 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.839 15:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:57.408 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.408 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:57.408 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.408 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:57.668 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.668 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:57.668 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.668 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:57.928 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.928 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:57.928 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.928 15:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:58.497 15:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.497 15:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:58.498 
15:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:58.756 15:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:59.325 15:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:00.264 15:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:00.264 15:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:00.264 15:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.264 15:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:00.524 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.524 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:00.524 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.524 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:01.093 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.093 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:01.093 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.093 15:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:01.353 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.353 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:01.353 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.353 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:01.923 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.923 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:01.923 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.923 15:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:02.492 15:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.492 15:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:02.492 15:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.492 15:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:02.753 15:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.753 15:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:02.753 15:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:03.322 15:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:03.581 15:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.962 15:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:05.531 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:31:05.531 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:05.531 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.531 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:05.791 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.791 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:05.791 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.791 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:06.362 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.362 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:06.362 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.362 15:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:06.621 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.621 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:06.621 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.621 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3278740 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3278740 ']' 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3278740 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3278740 00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # 
process_name=reactor_2
00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3278740'
00:31:07.191 killing process with pid 3278740
00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3278740
00:31:07.191 15:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3278740
00:31:07.191 {
00:31:07.191   "results": [
00:31:07.191     {
00:31:07.191       "job": "Nvme0n1",
00:31:07.191       "core_mask": "0x4",
00:31:07.191       "workload": "verify",
00:31:07.191       "status": "terminated",
00:31:07.191       "verify_range": {
00:31:07.191         "start": 0,
00:31:07.191         "length": 16384
00:31:07.191       },
00:31:07.191       "queue_depth": 128,
00:31:07.191       "io_size": 4096,
00:31:07.191       "runtime": 46.330558,
00:31:07.191       "iops": 4359.476956871532,
00:31:07.191       "mibps": 17.02920686277942,
00:31:07.191       "io_failed": 0,
00:31:07.191       "io_timeout": 0,
00:31:07.191       "avg_latency_us": 29309.49830601541,
00:31:07.191       "min_latency_us": 312.5096296296296,
00:31:07.191       "max_latency_us": 6039797.76
00:31:07.191     }
00:31:07.191   ],
00:31:07.191   "core_count": 1
00:31:07.191 }
00:31:07.458 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3278740
00:31:07.458 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:07.458 [2024-10-28 15:26:05.435137] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization...
00:31:07.458 [2024-10-28 15:26:05.435232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278740 ]
00:31:07.458 [2024-10-28 15:26:05.551445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:07.458 [2024-10-28 15:26:05.655779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:07.458 Running I/O for 90 seconds...
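Each check_status round traced above follows the same pattern: set_ANA_state issues a pair of nvmf_subsystem_listener_set_ana_state RPCs against the target, and port_status queries bdevperf's RPC socket with bdev_nvme_get_io_paths, using jq to pull one flag (current, connected or accessible) for the path behind a given port. A minimal reconstruction is sketched below; the RPCs and the jq filter are copied from the trace, while the helper bodies, the mismatch message and the example round at the end are assumptions rather than the actual multipath_status.sh code.

#!/usr/bin/env bash
# Sketch of the two helpers exercised repeatedly in the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Flip the ANA state of the 4420 and 4421 listeners (optimized,
# non_optimized or inaccessible) on the target side.
set_ANA_state() {
    "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Ask bdevperf (over /var/tmp/bdevperf.sock) for its I/O paths and compare
# one flag of the path behind the given port against the expected value.
port_status() {
    local port=$1 flag=$2 expected=$3 actual
    actual=$("$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag")
    [[ "$actual" == "$expected" ]] || echo "port $port: $flag=$actual, expected $expected"
}

# Example round mirroring the trace: 4420 stays usable, 4421 goes dark.
set_ANA_state non_optimized inaccessible
sleep 1                        # give the initiator time to observe the ANA change
port_status 4420 current true
port_status 4421 accessible false

Halfway through, the test also switches the controller to an active/active policy with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, after which both paths are expected to report current=true whenever their ANA state still allows I/O.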
00:31:07.458 6050.00 IOPS, 23.63 MiB/s [2024-10-28T14:26:54.325Z] 5280.50 IOPS, 20.63 MiB/s [2024-10-28T14:26:54.325Z] 5024.33 IOPS, 19.63 MiB/s [2024-10-28T14:26:54.325Z] 4878.75 IOPS, 19.06 MiB/s [2024-10-28T14:26:54.325Z] 4836.00 IOPS, 18.89 MiB/s [2024-10-28T14:26:54.325Z] 4800.33 IOPS, 18.75 MiB/s [2024-10-28T14:26:54.325Z] 4775.71 IOPS, 18.66 MiB/s [2024-10-28T14:26:54.325Z] 4726.38 IOPS, 18.46 MiB/s [2024-10-28T14:26:54.325Z] 4704.33 IOPS, 18.38 MiB/s [2024-10-28T14:26:54.325Z] 4698.90 IOPS, 18.36 MiB/s [2024-10-28T14:26:54.325Z] 4673.36 IOPS, 18.26 MiB/s [2024-10-28T14:26:54.325Z] 4656.33 IOPS, 18.19 MiB/s [2024-10-28T14:26:54.325Z] 4650.85 IOPS, 18.17 MiB/s [2024-10-28T14:26:54.325Z] 4643.79 IOPS, 18.14 MiB/s [2024-10-28T14:26:54.325Z] 4644.00 IOPS, 18.14 MiB/s [2024-10-28T14:26:54.325Z] 4631.88 IOPS, 18.09 MiB/s [2024-10-28T14:26:54.325Z] 4627.29 IOPS, 18.08 MiB/s [2024-10-28T14:26:54.325Z] 4625.50 IOPS, 18.07 MiB/s [2024-10-28T14:26:54.325Z] 4670.21 IOPS, 18.24 MiB/s [2024-10-28T14:26:54.325Z] [2024-10-28 15:26:26.963422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.963606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.963669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.963725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.963781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.963826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.963871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 
15:26:26.963926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.963980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.963999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.964401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.964420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.458 [2024-10-28 15:26:26.965880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:07.458 [2024-10-28 15:26:26.965907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.965931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.965959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.965977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:07.459 [2024-10-28 15:26:26.966527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.966962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.966980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.967812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.459 [2024-10-28 15:26:26.967861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.459 [2024-10-28 15:26:26.967909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.459 [2024-10-28 15:26:26.967956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.967986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.968010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.968040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.968058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:31:07.459 [2024-10-28 15:26:26.968087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.968105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.968134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.968153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.968183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.968201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.968229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.459 [2024-10-28 15:26:26.968248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:07.459 [2024-10-28 15:26:26.968277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.968843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.968984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:07.460 [2024-10-28 15:26:26.969707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.969951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.969970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.460 [2024-10-28 15:26:26.970021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:07.460 [2024-10-28 15:26:26.970783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.460 [2024-10-28 15:26:26.970802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:07.460 4591.90 IOPS, 17.94 MiB/s [2024-10-28T14:26:54.327Z] 4373.24 IOPS, 17.08 MiB/s [2024-10-28T14:26:54.327Z] 4174.45 IOPS, 16.31 MiB/s [2024-10-28T14:26:54.327Z] 3992.96 IOPS, 15.60 MiB/s [2024-10-28T14:26:54.327Z] 3826.58 IOPS, 14.95 MiB/s [2024-10-28T14:26:54.327Z] 3673.52 IOPS, 14.35 MiB/s [2024-10-28T14:26:54.327Z] 3622.92 IOPS, 14.15 MiB/s [2024-10-28T14:26:54.327Z] 3656.26 IOPS, 14.28 MiB/s [2024-10-28T14:26:54.327Z] 3687.00 IOPS, 14.40 MiB/s [2024-10-28T14:26:54.327Z] 3715.52 IOPS, 14.51 MiB/s [2024-10-28T14:26:54.328Z] 3786.60 IOPS, 14.79 MiB/s [2024-10-28T14:26:54.328Z] 3868.84 IOPS, 15.11 MiB/s [2024-10-28T14:26:54.328Z] 3950.44 IOPS, 15.43 MiB/s [2024-10-28T14:26:54.328Z] 4025.61 IOPS, 15.73 MiB/s [2024-10-28T14:26:54.328Z] 4083.85 IOPS, 15.95 MiB/s [2024-10-28T14:26:54.328Z] 4092.83 IOPS, 15.99 MiB/s [2024-10-28T14:26:54.328Z] 4104.22 IOPS, 16.03 MiB/s [2024-10-28T14:26:54.328Z] 4115.24 IOPS, 16.08 MiB/s [2024-10-28T14:26:54.328Z] 4129.00 IOPS, 16.13 MiB/s [2024-10-28T14:26:54.328Z] 4163.79 IOPS, 16.26 MiB/s [2024-10-28T14:26:54.328Z] 4215.80 IOPS, 16.47 MiB/s [2024-10-28T14:26:54.328Z] 4262.37 IOPS, 16.65 MiB/s [2024-10-28T14:26:54.328Z] 4306.95 IOPS, 16.82 MiB/s [2024-10-28T14:26:54.328Z] [2024-10-28 15:26:50.379716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.379788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.379864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.379889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.379934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.379955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.379982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 
15:26:50.380046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47224 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.380831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380945] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.380964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.380999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.381017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.381044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.381069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.381096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.381115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.381140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.381159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.381185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.381203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.382338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.382367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.382399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.382420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.382447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.461 [2024-10-28 15:26:50.382465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.382498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.382517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 
15:26:50.382543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.382562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.382588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.382607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.382633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.461 [2024-10-28 15:26:50.382659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:07.461 [2024-10-28 15:26:50.382688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.382708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.382734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.382759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.382786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.382805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.382830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.382849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.382875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.382894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.382921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.382939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.382964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.382982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.383026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.383201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.383454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.462 [2024-10-28 15:26:50.383473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.384035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.384061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.384092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.384112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.384138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.384157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.384182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.384200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:07.462 [2024-10-28 15:26:50.384227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.462 [2024-10-28 15:26:50.384245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:07.462 4349.35 IOPS, 16.99 MiB/s [2024-10-28T14:26:54.329Z] 4351.52 IOPS, 17.00 MiB/s [2024-10-28T14:26:54.329Z] 4358.78 IOPS, 17.03 MiB/s [2024-10-28T14:26:54.329Z] 4361.26 IOPS, 17.04 MiB/s [2024-10-28T14:26:54.329Z] Received shutdown signal, test time was about 46.332091 seconds 00:31:07.462 00:31:07.462 Latency(us) 00:31:07.462 [2024-10-28T14:26:54.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.462 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:07.462 Verification LBA range: start 0x0 length 0x4000 00:31:07.462 Nvme0n1 : 46.33 4359.48 17.03 0.00 0.00 29309.50 312.51 6039797.76 00:31:07.462 [2024-10-28T14:26:54.329Z] =================================================================================================================== 00:31:07.462 [2024-10-28T14:26:54.329Z] Total : 4359.48 17.03 0.00 0.00 29309.50 312.51 6039797.76 00:31:07.462 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:07.722 15:26:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.722 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.722 rmmod nvme_tcp 00:31:07.722 rmmod nvme_fabrics 00:31:07.983 rmmod nvme_keyring 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3278196 ']' 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3278196 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3278196 ']' 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3278196 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:07.983 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3278196 00:31:07.984 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:07.984 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:07.984 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3278196' 00:31:07.984 killing process with pid 3278196 00:31:07.984 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3278196 00:31:07.984 15:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3278196 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.245 15:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.789 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.789 00:31:10.789 real 0m59.785s 00:31:10.789 user 3m6.028s 00:31:10.789 sys 0m15.364s 00:31:10.789 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:10.789 15:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:10.789 ************************************ 00:31:10.789 END TEST nvmf_host_multipath_status 00:31:10.789 ************************************ 00:31:10.789 15:26:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:10.789 15:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:10.789 15:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:10.789 15:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.789 ************************************ 00:31:10.789 START TEST nvmf_discovery_remove_ifc 00:31:10.789 ************************************ 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:10.790 * Looking for test storage... 
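The teardown traced just above (nvmftestfini) kills the nvmf target process, unloads the initiator kernel modules, restores iptables without the SPDK_NVMF-tagged rules, and removes the test namespace before the next test starts. A minimal bash sketch of that sequence, assuming the pid and interface names from this run and treating _remove_spdk_ns as a plain ip netns delete (both are assumptions, not the helper's actual code):

  #!/usr/bin/env bash
  # Hedged sketch of the nvmftestfini-style teardown traced above.
  set -e

  NVMF_PID=3278196        # target pid from this run; a placeholder in general
  INITIATOR_IF=cvl_0_1    # initiator-side interface used in this run
  NS=cvl_0_0_ns_spdk      # target-side network namespace used in this run

  # Stop the nvmf target if it is still alive.
  if kill -0 "$NVMF_PID" 2>/dev/null; then
    kill "$NVMF_PID"
  fi

  # Unload the kernel initiator modules the test pulled in.
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true

  # Restore iptables minus the SPDK_NVMF-tagged rules added by nvmftestinit.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Drop the test namespace and flush the initiator-side address.
  ip netns delete "$NS" 2>/dev/null || true
  ip -4 addr flush "$INITIATOR_IF"
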
00:31:10.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lcov --version 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.790 --rc genhtml_branch_coverage=1 00:31:10.790 --rc genhtml_function_coverage=1 00:31:10.790 --rc genhtml_legend=1 00:31:10.790 --rc geninfo_all_blocks=1 00:31:10.790 --rc geninfo_unexecuted_blocks=1 00:31:10.790 00:31:10.790 ' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.790 --rc genhtml_branch_coverage=1 00:31:10.790 --rc genhtml_function_coverage=1 00:31:10.790 --rc genhtml_legend=1 00:31:10.790 --rc geninfo_all_blocks=1 00:31:10.790 --rc geninfo_unexecuted_blocks=1 00:31:10.790 00:31:10.790 ' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.790 --rc genhtml_branch_coverage=1 00:31:10.790 --rc genhtml_function_coverage=1 00:31:10.790 --rc genhtml_legend=1 00:31:10.790 --rc geninfo_all_blocks=1 00:31:10.790 --rc geninfo_unexecuted_blocks=1 00:31:10.790 00:31:10.790 ' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.790 --rc genhtml_branch_coverage=1 00:31:10.790 --rc genhtml_function_coverage=1 00:31:10.790 --rc genhtml_legend=1 00:31:10.790 --rc geninfo_all_blocks=1 00:31:10.790 --rc geninfo_unexecuted_blocks=1 00:31:10.790 00:31:10.790 ' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.790 
15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:10.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.790 15:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:31:14.084 15:27:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.084 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:14.085 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.085 15:27:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:14.085 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:14.085 Found net devices under 0000:84:00.0: cvl_0_0 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:14.085 Found net devices under 0000:84:00.1: cvl_0_1 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.085 
15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:31:14.085 00:31:14.085 --- 10.0.0.2 ping statistics --- 00:31:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.085 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:14.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:31:14.085 00:31:14.085 --- 10.0.0.1 ping statistics --- 00:31:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.085 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3286674 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3286674 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3286674 ']' 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
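The nvmf_tcp_init trace above moves the target-side port into its own network namespace, assigns 10.0.0.1/24 to the initiator side and 10.0.0.2/24 to the target side, opens TCP port 4420, and proves reachability with one ping in each direction before the target is started. A sketch of that wiring, assuming the cvl_0_0/cvl_0_1 interface names from this run:

  #!/usr/bin/env bash
  # Sketch of the namespace wiring performed by nvmf_tcp_init in the trace above.
  set -e

  TARGET_IF=cvl_0_0
  INITIATOR_IF=cvl_0_1
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"                        # target lives in its own namespace
  ip link set "$TARGET_IF" netns "$NS"      # move the target-side port into it
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Allow NVMe/TCP traffic in, tagged so the teardown can strip it again.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

  # Reachability check in both directions, as in the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
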
00:31:14.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.085 15:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.085 [2024-10-28 15:27:00.636230] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:31:14.085 [2024-10-28 15:27:00.636410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.085 [2024-10-28 15:27:00.809169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.085 [2024-10-28 15:27:00.931332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.085 [2024-10-28 15:27:00.931458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.085 [2024-10-28 15:27:00.931496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.086 [2024-10-28 15:27:00.931535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.086 [2024-10-28 15:27:00.931547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.086 [2024-10-28 15:27:00.932410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.656 [2024-10-28 15:27:01.280004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.656 [2024-10-28 15:27:01.288523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:14.656 null0 00:31:14.656 [2024-10-28 15:27:01.321392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3286732 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3286732 /tmp/host.sock 00:31:14.656 15:27:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3286732 ']' 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:14.656 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.656 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.656 [2024-10-28 15:27:01.444894] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:31:14.656 [2024-10-28 15:27:01.445053] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286732 ] 00:31:14.916 [2024-10-28 15:27:01.556242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.916 [2024-10-28 15:27:01.623957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.916 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.177 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.177 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:15.177 15:27:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.177 15:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.117 [2024-10-28 15:27:02.932159] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:16.117 [2024-10-28 15:27:02.932236] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:16.117 [2024-10-28 15:27:02.932291] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:16.376 [2024-10-28 15:27:03.060880] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:16.376 [2024-10-28 15:27:03.239780] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:16.376 [2024-10-28 15:27:03.241753] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9765d0:1 started. 00:31:16.638 [2024-10-28 15:27:03.245544] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:16.638 [2024-10-28 15:27:03.245720] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:16.638 [2024-10-28 15:27:03.245786] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:16.638 [2024-10-28 15:27:03.245819] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:16.638 [2024-10-28 15:27:03.245856] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.638 [2024-10-28 15:27:03.248619] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9765d0 was disconnected and freed. delete nvme_qpair. 
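The discovery attach above is driven through rpc_cmd against the host application's /tmp/host.sock. The same flow can be sketched as direct scripts/rpc.py calls, on the assumption that rpc_cmd forwards its arguments to rpc.py -s <socket>; the flags themselves are copied from the trace:

  #!/usr/bin/env bash
  # Sketch of the host-side discovery attach, using rpc.py instead of rpc_cmd.
  set -e

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/tmp/host.sock

  # Options are set before framework_start_init because the host app was started
  # with --wait-for-rpc (-e 1 exactly as in the trace above).
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options -e 1
  "$RPC" -s "$HOST_SOCK" framework_start_init

  # Attach to the discovery service on 10.0.0.2:8009 and block until the
  # discovered subsystem's namespaces are exposed as bdevs (nvme0n1 in this run).
  "$RPC" -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

  "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name'   # expect nvme0n1
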
00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:16.638 15:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.019 15:27:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:18.019 15:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:18.952 15:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:19.888 15:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:20.833 15:27:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:20.833 15:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:22.216 [2024-10-28 15:27:08.684853] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:22.216 [2024-10-28 15:27:08.685006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.216 [2024-10-28 15:27:08.685033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.216 [2024-10-28 15:27:08.685056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.216 [2024-10-28 15:27:08.685072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.216 [2024-10-28 15:27:08.685088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.216 [2024-10-28 15:27:08.685105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.216 [2024-10-28 15:27:08.685121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.216 [2024-10-28 15:27:08.685137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.216 [2024-10-28 15:27:08.685153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.216 [2024-10-28 15:27:08.685179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.216 [2024-10-28 15:27:08.685195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952f10 is same with the state(6) to be set 00:31:22.216 [2024-10-28 15:27:08.694873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x952f10 (9): Bad file descriptor 00:31:22.216 15:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:22.216 15:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.216 15:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:22.216 15:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.216 [2024-10-28 15:27:08.704906] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpa 15:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:22.216 irs for reset. 
00:31:22.216 [2024-10-28 15:27:08.704937] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:22.216 [2024-10-28 15:27:08.704949] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:22.216 [2024-10-28 15:27:08.704959] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:22.216 [2024-10-28 15:27:08.705058] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:22.216 15:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:22.216 15:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.155 [2024-10-28 15:27:09.734747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:23.155 [2024-10-28 15:27:09.734823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x952f10 with addr=10.0.0.2, port=4420 00:31:23.155 [2024-10-28 15:27:09.734855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952f10 is same with the state(6) to be set 00:31:23.155 [2024-10-28 15:27:09.734925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x952f10 (9): Bad file descriptor 00:31:23.155 [2024-10-28 15:27:09.735432] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:31:23.155 [2024-10-28 15:27:09.735555] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:23.155 [2024-10-28 15:27:09.735576] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:23.155 [2024-10-28 15:27:09.735596] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:23.155 [2024-10-28 15:27:09.735612] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:23.155 [2024-10-28 15:27:09.735624] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:23.155 [2024-10-28 15:27:09.735675] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:23.155 [2024-10-28 15:27:09.735710] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:23.155 [2024-10-28 15:27:09.735722] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:23.155 15:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.155 15:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:23.155 15:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:24.105 [2024-10-28 15:27:10.738237] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:24.105 [2024-10-28 15:27:10.738342] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
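The repeated rpc_cmd / jq / sort / xargs entries above are the test's wait_for_bdev / get_bdev_list polling: once a second it fetches the bdev list from the SPDK host application over its RPC socket at /tmp/host.sock and compares the joined names against the expected value (first nvme0n1, then the empty string once the target interface has been pulled). A minimal standalone sketch of that pattern, assuming SPDK's rpc.py and jq are on PATH; the helper name below is illustrative, not the test's own function:

  # Poll the SPDK app on /tmp/host.sock until its joined bdev names equal "$1".
  poll_bdev_list() {
      local expected="$1" names
      while :; do
          names=$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
          [[ "$names" == "$expected" ]] && return 0
          sleep 1
      done
  }
  # poll_bdev_list ''         # wait for the attached namespace bdev to disappear
  # poll_bdev_list nvme0n1    # wait for a bdev with that name to exist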
00:31:24.105 [2024-10-28 15:27:10.738405] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:24.105 [2024-10-28 15:27:10.738442] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:24.105 [2024-10-28 15:27:10.738484] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:31:24.105 [2024-10-28 15:27:10.738500] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:24.105 [2024-10-28 15:27:10.738513] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:24.105 [2024-10-28 15:27:10.738543] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:24.105 [2024-10-28 15:27:10.738589] bdev_nvme.c:7042:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:24.105 [2024-10-28 15:27:10.738644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.105 [2024-10-28 15:27:10.738678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.105 [2024-10-28 15:27:10.738715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.105 [2024-10-28 15:27:10.738730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.105 [2024-10-28 15:27:10.738746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.105 [2024-10-28 15:27:10.738761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.105 [2024-10-28 15:27:10.738777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.105 [2024-10-28 15:27:10.738801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.105 [2024-10-28 15:27:10.738817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.105 [2024-10-28 15:27:10.738832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.105 [2024-10-28 15:27:10.738848] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
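The errno 110 (Connection timed out) and Bad file descriptor errors in the blocks above are the expected fallout of the interface teardown done earlier in the test: with 10.0.0.2 removed from cvl_0_0, the host's reconnect attempts to port 4420 cannot complete, the controller is left in the failed state, and the discovery entry for nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 is removed. For reference, the teardown that triggers this is the two commands traced earlier, with the namespace and interface names used by this job:

  # Drop the target address inside the target network namespace and take the
  # link down; the host side then times out and fails the controller.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down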
00:31:24.105 [2024-10-28 15:27:10.738911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x942240 (9): Bad file descriptor 00:31:24.105 [2024-10-28 15:27:10.739897] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:24.105 [2024-10-28 15:27:10.739925] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.105 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:24.106 15:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:25.082 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:25.082 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.082 15:27:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:25.082 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.082 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:25.082 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:25.082 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:25.082 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.342 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:25.342 15:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:25.910 [2024-10-28 15:27:12.758428] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:25.910 [2024-10-28 15:27:12.758501] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:25.910 [2024-10-28 15:27:12.758556] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.171 [2024-10-28 15:27:12.845858] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:26.171 [2024-10-28 15:27:12.946184] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:31:26.171 [2024-10-28 15:27:12.947589] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x95d350:1 started. 00:31:26.171 [2024-10-28 15:27:12.950020] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:26.171 [2024-10-28 15:27:12.950129] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:26.171 [2024-10-28 15:27:12.950220] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:26.171 [2024-10-28 15:27:12.950275] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:26.171 [2024-10-28 15:27:12.950307] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.171 [2024-10-28 15:27:12.996346] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x95d350 was disconnected and freed. delete nvme_qpair. 
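Once the address is restored and the link comes back up, the discovery service recovers on its own: the block above shows the discovery controller re-attaching at 10.0.0.2:8009, NVM subsystem nqn.2016-06.io.spdk:cnode0 being reported again at 10.0.0.2:4420, and a fresh namespace bdev being created (verified as nvme1n1 just below). The restore side, in sketch form, uses the same commands the test runs plus the polling helper sketched above:

  # Re-add the target address, bring the interface back up, and wait for the
  # rediscovered namespace to reappear as a bdev named nvme1n1.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  poll_bdev_list nvme1n1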
00:31:26.171 15:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3286732 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3286732 ']' 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3286732 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3286732 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3286732' 00:31:26.431 killing process with pid 3286732 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3286732 00:31:26.431 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3286732 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.691 rmmod nvme_tcp 00:31:26.691 rmmod nvme_fabrics 00:31:26.691 rmmod nvme_keyring 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3286674 ']' 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3286674 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3286674 ']' 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3286674 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@955 -- # uname 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3286674 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3286674' 00:31:26.691 killing process with pid 3286674 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3286674 00:31:26.691 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3286674 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.261 15:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.175 15:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.175 00:31:29.175 real 0m18.746s 00:31:29.175 user 0m25.676s 00:31:29.175 sys 0m4.135s 00:31:29.175 15:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:29.175 15:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 ************************************ 00:31:29.175 END TEST nvmf_discovery_remove_ifc 00:31:29.175 ************************************ 00:31:29.175 15:27:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:29.175 15:27:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:29.175 15:27:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:29.175 15:27:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 ************************************ 00:31:29.175 
START TEST nvmf_identify_kernel_target 00:31:29.175 ************************************ 00:31:29.175 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:29.435 * Looking for test storage... 00:31:29.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:29.435 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:29.435 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lcov --version 00:31:29.435 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:29.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.697 --rc genhtml_branch_coverage=1 00:31:29.697 --rc genhtml_function_coverage=1 00:31:29.697 --rc genhtml_legend=1 00:31:29.697 --rc geninfo_all_blocks=1 00:31:29.697 --rc geninfo_unexecuted_blocks=1 00:31:29.697 00:31:29.697 ' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:29.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.697 --rc genhtml_branch_coverage=1 00:31:29.697 --rc genhtml_function_coverage=1 00:31:29.697 --rc genhtml_legend=1 00:31:29.697 --rc geninfo_all_blocks=1 00:31:29.697 --rc geninfo_unexecuted_blocks=1 00:31:29.697 00:31:29.697 ' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:29.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.697 --rc genhtml_branch_coverage=1 00:31:29.697 --rc genhtml_function_coverage=1 00:31:29.697 --rc genhtml_legend=1 00:31:29.697 --rc geninfo_all_blocks=1 00:31:29.697 --rc geninfo_unexecuted_blocks=1 00:31:29.697 00:31:29.697 ' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:29.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.697 --rc genhtml_branch_coverage=1 00:31:29.697 --rc genhtml_function_coverage=1 00:31:29.697 --rc genhtml_legend=1 00:31:29.697 --rc geninfo_all_blocks=1 00:31:29.697 --rc geninfo_unexecuted_blocks=1 00:31:29.697 00:31:29.697 ' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.697 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:29.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.698 15:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:32.240 15:27:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.240 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:32.241 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:32.241 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:32.241 Found net devices under 0000:84:00.0: cvl_0_0 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:32.241 Found net devices under 0000:84:00.1: cvl_0_1 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.241 15:27:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.241 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.241 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.241 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.241 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.241 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:31:32.502 00:31:32.502 --- 10.0.0.2 ping statistics --- 00:31:32.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.502 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:31:32.502 00:31:32.502 --- 10.0.0.1 ping statistics --- 00:31:32.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.502 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.502 15:27:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:32.502 15:27:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:34.411 Waiting for block devices as requested 00:31:34.411 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:34.411 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:34.411 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:34.670 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:34.670 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:34.670 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:34.670 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:34.928 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:34.928 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:34.928 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:35.187 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:35.187 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:35.187 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:35.446 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:35.446 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:35.446 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:35.706 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:35.706 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1649 -- # [[ none != none ]] 
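At this point the identify test is configuring the Linux kernel nvmet target rather than an SPDK target: configure_kernel_target builds a subsystem, a namespace backed by the local NVMe drive, and a TCP listener entirely through configfs, which is what the mkdir / echo / ln -s trace below performs (xtrace does not show the echo redirect targets). A condensed sketch of that sequence, using the standard nvmet configfs attribute names together with the NQN, device, and address from this job:

  # Export /dev/nvme0n1 via the kernel NVMe-oF target over TCP on 10.0.0.1:4420.
  modprobe nvmet            # target core; nvmet-tcp may need an explicit modprobe on some kernels
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The port starts listening once the subsystem symlink exists; the two discovery-log records printed by nvme discover further down (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420) confirm it.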
00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:35.707 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:35.966 No valid GPT data, bailing 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:31:35.967 00:31:35.967 Discovery Log Number of Records 2, Generation counter 2 00:31:35.967 =====Discovery Log Entry 0====== 00:31:35.967 trtype: tcp 00:31:35.967 adrfam: ipv4 00:31:35.967 subtype: current discovery subsystem 00:31:35.967 treq: not specified, sq flow control disable supported 00:31:35.967 portid: 1 00:31:35.967 trsvcid: 4420 00:31:35.967 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:35.967 traddr: 10.0.0.1 00:31:35.967 eflags: none 00:31:35.967 sectype: none 00:31:35.967 =====Discovery Log Entry 1====== 00:31:35.967 trtype: tcp 00:31:35.967 adrfam: ipv4 00:31:35.967 subtype: nvme subsystem 00:31:35.967 treq: not specified, sq flow control disable 
supported 00:31:35.967 portid: 1 00:31:35.967 trsvcid: 4420 00:31:35.967 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:35.967 traddr: 10.0.0.1 00:31:35.967 eflags: none 00:31:35.967 sectype: none 00:31:35.967 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:35.967 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:36.228 ===================================================== 00:31:36.228 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:36.228 ===================================================== 00:31:36.228 Controller Capabilities/Features 00:31:36.228 ================================ 00:31:36.228 Vendor ID: 0000 00:31:36.228 Subsystem Vendor ID: 0000 00:31:36.228 Serial Number: 04f1be8856fe068712b9 00:31:36.228 Model Number: Linux 00:31:36.228 Firmware Version: 6.8.9-20 00:31:36.228 Recommended Arb Burst: 0 00:31:36.228 IEEE OUI Identifier: 00 00 00 00:31:36.228 Multi-path I/O 00:31:36.228 May have multiple subsystem ports: No 00:31:36.228 May have multiple controllers: No 00:31:36.228 Associated with SR-IOV VF: No 00:31:36.228 Max Data Transfer Size: Unlimited 00:31:36.228 Max Number of Namespaces: 0 00:31:36.228 Max Number of I/O Queues: 1024 00:31:36.228 NVMe Specification Version (VS): 1.3 00:31:36.228 NVMe Specification Version (Identify): 1.3 00:31:36.229 Maximum Queue Entries: 1024 00:31:36.229 Contiguous Queues Required: No 00:31:36.229 Arbitration Mechanisms Supported 00:31:36.229 Weighted Round Robin: Not Supported 00:31:36.229 Vendor Specific: Not Supported 00:31:36.229 Reset Timeout: 7500 ms 00:31:36.229 Doorbell Stride: 4 bytes 00:31:36.229 NVM Subsystem Reset: Not Supported 00:31:36.229 Command Sets Supported 00:31:36.229 NVM Command Set: Supported 00:31:36.229 Boot Partition: Not Supported 00:31:36.229 Memory Page Size Minimum: 4096 bytes 00:31:36.229 Memory Page Size Maximum: 4096 bytes 00:31:36.229 Persistent Memory Region: Not Supported 00:31:36.229 Optional Asynchronous Events Supported 00:31:36.229 Namespace Attribute Notices: Not Supported 00:31:36.229 Firmware Activation Notices: Not Supported 00:31:36.229 ANA Change Notices: Not Supported 00:31:36.229 PLE Aggregate Log Change Notices: Not Supported 00:31:36.229 LBA Status Info Alert Notices: Not Supported 00:31:36.229 EGE Aggregate Log Change Notices: Not Supported 00:31:36.229 Normal NVM Subsystem Shutdown event: Not Supported 00:31:36.229 Zone Descriptor Change Notices: Not Supported 00:31:36.229 Discovery Log Change Notices: Supported 00:31:36.229 Controller Attributes 00:31:36.229 128-bit Host Identifier: Not Supported 00:31:36.229 Non-Operational Permissive Mode: Not Supported 00:31:36.229 NVM Sets: Not Supported 00:31:36.229 Read Recovery Levels: Not Supported 00:31:36.229 Endurance Groups: Not Supported 00:31:36.229 Predictable Latency Mode: Not Supported 00:31:36.229 Traffic Based Keep ALive: Not Supported 00:31:36.229 Namespace Granularity: Not Supported 00:31:36.229 SQ Associations: Not Supported 00:31:36.229 UUID List: Not Supported 00:31:36.229 Multi-Domain Subsystem: Not Supported 00:31:36.229 Fixed Capacity Management: Not Supported 00:31:36.229 Variable Capacity Management: Not Supported 00:31:36.229 Delete Endurance Group: Not Supported 00:31:36.229 Delete NVM Set: Not Supported 00:31:36.229 Extended LBA Formats Supported: Not Supported 00:31:36.229 Flexible Data Placement 
Supported: Not Supported 00:31:36.229 00:31:36.229 Controller Memory Buffer Support 00:31:36.229 ================================ 00:31:36.229 Supported: No 00:31:36.229 00:31:36.229 Persistent Memory Region Support 00:31:36.229 ================================ 00:31:36.229 Supported: No 00:31:36.229 00:31:36.229 Admin Command Set Attributes 00:31:36.229 ============================ 00:31:36.229 Security Send/Receive: Not Supported 00:31:36.229 Format NVM: Not Supported 00:31:36.229 Firmware Activate/Download: Not Supported 00:31:36.229 Namespace Management: Not Supported 00:31:36.229 Device Self-Test: Not Supported 00:31:36.229 Directives: Not Supported 00:31:36.229 NVMe-MI: Not Supported 00:31:36.229 Virtualization Management: Not Supported 00:31:36.229 Doorbell Buffer Config: Not Supported 00:31:36.229 Get LBA Status Capability: Not Supported 00:31:36.229 Command & Feature Lockdown Capability: Not Supported 00:31:36.229 Abort Command Limit: 1 00:31:36.229 Async Event Request Limit: 1 00:31:36.229 Number of Firmware Slots: N/A 00:31:36.229 Firmware Slot 1 Read-Only: N/A 00:31:36.229 Firmware Activation Without Reset: N/A 00:31:36.229 Multiple Update Detection Support: N/A 00:31:36.229 Firmware Update Granularity: No Information Provided 00:31:36.229 Per-Namespace SMART Log: No 00:31:36.229 Asymmetric Namespace Access Log Page: Not Supported 00:31:36.229 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:36.229 Command Effects Log Page: Not Supported 00:31:36.229 Get Log Page Extended Data: Supported 00:31:36.229 Telemetry Log Pages: Not Supported 00:31:36.229 Persistent Event Log Pages: Not Supported 00:31:36.229 Supported Log Pages Log Page: May Support 00:31:36.229 Commands Supported & Effects Log Page: Not Supported 00:31:36.229 Feature Identifiers & Effects Log Page:May Support 00:31:36.229 NVMe-MI Commands & Effects Log Page: May Support 00:31:36.229 Data Area 4 for Telemetry Log: Not Supported 00:31:36.229 Error Log Page Entries Supported: 1 00:31:36.229 Keep Alive: Not Supported 00:31:36.229 00:31:36.229 NVM Command Set Attributes 00:31:36.229 ========================== 00:31:36.229 Submission Queue Entry Size 00:31:36.229 Max: 1 00:31:36.229 Min: 1 00:31:36.229 Completion Queue Entry Size 00:31:36.229 Max: 1 00:31:36.229 Min: 1 00:31:36.229 Number of Namespaces: 0 00:31:36.229 Compare Command: Not Supported 00:31:36.229 Write Uncorrectable Command: Not Supported 00:31:36.229 Dataset Management Command: Not Supported 00:31:36.229 Write Zeroes Command: Not Supported 00:31:36.229 Set Features Save Field: Not Supported 00:31:36.229 Reservations: Not Supported 00:31:36.229 Timestamp: Not Supported 00:31:36.229 Copy: Not Supported 00:31:36.229 Volatile Write Cache: Not Present 00:31:36.229 Atomic Write Unit (Normal): 1 00:31:36.229 Atomic Write Unit (PFail): 1 00:31:36.229 Atomic Compare & Write Unit: 1 00:31:36.229 Fused Compare & Write: Not Supported 00:31:36.229 Scatter-Gather List 00:31:36.229 SGL Command Set: Supported 00:31:36.229 SGL Keyed: Not Supported 00:31:36.229 SGL Bit Bucket Descriptor: Not Supported 00:31:36.229 SGL Metadata Pointer: Not Supported 00:31:36.229 Oversized SGL: Not Supported 00:31:36.229 SGL Metadata Address: Not Supported 00:31:36.229 SGL Offset: Supported 00:31:36.229 Transport SGL Data Block: Not Supported 00:31:36.229 Replay Protected Memory Block: Not Supported 00:31:36.229 00:31:36.229 Firmware Slot Information 00:31:36.229 ========================= 00:31:36.229 Active slot: 0 00:31:36.229 00:31:36.229 00:31:36.229 Error Log 00:31:36.229 
========= 00:31:36.229 00:31:36.229 Active Namespaces 00:31:36.229 ================= 00:31:36.229 Discovery Log Page 00:31:36.229 ================== 00:31:36.229 Generation Counter: 2 00:31:36.229 Number of Records: 2 00:31:36.229 Record Format: 0 00:31:36.229 00:31:36.229 Discovery Log Entry 0 00:31:36.229 ---------------------- 00:31:36.229 Transport Type: 3 (TCP) 00:31:36.229 Address Family: 1 (IPv4) 00:31:36.229 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:36.229 Entry Flags: 00:31:36.229 Duplicate Returned Information: 0 00:31:36.229 Explicit Persistent Connection Support for Discovery: 0 00:31:36.229 Transport Requirements: 00:31:36.229 Secure Channel: Not Specified 00:31:36.229 Port ID: 1 (0x0001) 00:31:36.229 Controller ID: 65535 (0xffff) 00:31:36.229 Admin Max SQ Size: 32 00:31:36.229 Transport Service Identifier: 4420 00:31:36.229 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:36.229 Transport Address: 10.0.0.1 00:31:36.229 Discovery Log Entry 1 00:31:36.229 ---------------------- 00:31:36.229 Transport Type: 3 (TCP) 00:31:36.229 Address Family: 1 (IPv4) 00:31:36.229 Subsystem Type: 2 (NVM Subsystem) 00:31:36.229 Entry Flags: 00:31:36.229 Duplicate Returned Information: 0 00:31:36.229 Explicit Persistent Connection Support for Discovery: 0 00:31:36.229 Transport Requirements: 00:31:36.229 Secure Channel: Not Specified 00:31:36.229 Port ID: 1 (0x0001) 00:31:36.229 Controller ID: 65535 (0xffff) 00:31:36.229 Admin Max SQ Size: 32 00:31:36.229 Transport Service Identifier: 4420 00:31:36.229 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:36.229 Transport Address: 10.0.0.1 00:31:36.229 15:27:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.229 get_feature(0x01) failed 00:31:36.229 get_feature(0x02) failed 00:31:36.229 get_feature(0x04) failed 00:31:36.229 ===================================================== 00:31:36.229 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:36.229 ===================================================== 00:31:36.229 Controller Capabilities/Features 00:31:36.229 ================================ 00:31:36.229 Vendor ID: 0000 00:31:36.229 Subsystem Vendor ID: 0000 00:31:36.230 Serial Number: 7288165613168fde901d 00:31:36.230 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:36.230 Firmware Version: 6.8.9-20 00:31:36.230 Recommended Arb Burst: 6 00:31:36.230 IEEE OUI Identifier: 00 00 00 00:31:36.230 Multi-path I/O 00:31:36.230 May have multiple subsystem ports: Yes 00:31:36.230 May have multiple controllers: Yes 00:31:36.230 Associated with SR-IOV VF: No 00:31:36.230 Max Data Transfer Size: Unlimited 00:31:36.230 Max Number of Namespaces: 1024 00:31:36.230 Max Number of I/O Queues: 128 00:31:36.230 NVMe Specification Version (VS): 1.3 00:31:36.230 NVMe Specification Version (Identify): 1.3 00:31:36.230 Maximum Queue Entries: 1024 00:31:36.230 Contiguous Queues Required: No 00:31:36.230 Arbitration Mechanisms Supported 00:31:36.230 Weighted Round Robin: Not Supported 00:31:36.230 Vendor Specific: Not Supported 00:31:36.230 Reset Timeout: 7500 ms 00:31:36.230 Doorbell Stride: 4 bytes 00:31:36.230 NVM Subsystem Reset: Not Supported 00:31:36.230 Command Sets Supported 00:31:36.230 NVM Command Set: Supported 00:31:36.230 Boot Partition: Not Supported 00:31:36.230 
Memory Page Size Minimum: 4096 bytes 00:31:36.230 Memory Page Size Maximum: 4096 bytes 00:31:36.230 Persistent Memory Region: Not Supported 00:31:36.230 Optional Asynchronous Events Supported 00:31:36.230 Namespace Attribute Notices: Supported 00:31:36.230 Firmware Activation Notices: Not Supported 00:31:36.230 ANA Change Notices: Supported 00:31:36.230 PLE Aggregate Log Change Notices: Not Supported 00:31:36.230 LBA Status Info Alert Notices: Not Supported 00:31:36.230 EGE Aggregate Log Change Notices: Not Supported 00:31:36.230 Normal NVM Subsystem Shutdown event: Not Supported 00:31:36.230 Zone Descriptor Change Notices: Not Supported 00:31:36.230 Discovery Log Change Notices: Not Supported 00:31:36.230 Controller Attributes 00:31:36.230 128-bit Host Identifier: Supported 00:31:36.230 Non-Operational Permissive Mode: Not Supported 00:31:36.230 NVM Sets: Not Supported 00:31:36.230 Read Recovery Levels: Not Supported 00:31:36.230 Endurance Groups: Not Supported 00:31:36.230 Predictable Latency Mode: Not Supported 00:31:36.230 Traffic Based Keep ALive: Supported 00:31:36.230 Namespace Granularity: Not Supported 00:31:36.230 SQ Associations: Not Supported 00:31:36.230 UUID List: Not Supported 00:31:36.230 Multi-Domain Subsystem: Not Supported 00:31:36.230 Fixed Capacity Management: Not Supported 00:31:36.230 Variable Capacity Management: Not Supported 00:31:36.230 Delete Endurance Group: Not Supported 00:31:36.230 Delete NVM Set: Not Supported 00:31:36.230 Extended LBA Formats Supported: Not Supported 00:31:36.230 Flexible Data Placement Supported: Not Supported 00:31:36.230 00:31:36.230 Controller Memory Buffer Support 00:31:36.230 ================================ 00:31:36.230 Supported: No 00:31:36.230 00:31:36.230 Persistent Memory Region Support 00:31:36.230 ================================ 00:31:36.230 Supported: No 00:31:36.230 00:31:36.230 Admin Command Set Attributes 00:31:36.230 ============================ 00:31:36.230 Security Send/Receive: Not Supported 00:31:36.230 Format NVM: Not Supported 00:31:36.230 Firmware Activate/Download: Not Supported 00:31:36.230 Namespace Management: Not Supported 00:31:36.230 Device Self-Test: Not Supported 00:31:36.230 Directives: Not Supported 00:31:36.230 NVMe-MI: Not Supported 00:31:36.230 Virtualization Management: Not Supported 00:31:36.230 Doorbell Buffer Config: Not Supported 00:31:36.230 Get LBA Status Capability: Not Supported 00:31:36.230 Command & Feature Lockdown Capability: Not Supported 00:31:36.230 Abort Command Limit: 4 00:31:36.230 Async Event Request Limit: 4 00:31:36.230 Number of Firmware Slots: N/A 00:31:36.230 Firmware Slot 1 Read-Only: N/A 00:31:36.230 Firmware Activation Without Reset: N/A 00:31:36.230 Multiple Update Detection Support: N/A 00:31:36.230 Firmware Update Granularity: No Information Provided 00:31:36.230 Per-Namespace SMART Log: Yes 00:31:36.230 Asymmetric Namespace Access Log Page: Supported 00:31:36.230 ANA Transition Time : 10 sec 00:31:36.230 00:31:36.230 Asymmetric Namespace Access Capabilities 00:31:36.230 ANA Optimized State : Supported 00:31:36.230 ANA Non-Optimized State : Supported 00:31:36.230 ANA Inaccessible State : Supported 00:31:36.230 ANA Persistent Loss State : Supported 00:31:36.230 ANA Change State : Supported 00:31:36.230 ANAGRPID is not changed : No 00:31:36.230 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:36.230 00:31:36.230 ANA Group Identifier Maximum : 128 00:31:36.230 Number of ANA Group Identifiers : 128 00:31:36.230 Max Number of Allowed Namespaces : 1024 00:31:36.230 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:36.230 Command Effects Log Page: Supported 00:31:36.230 Get Log Page Extended Data: Supported 00:31:36.230 Telemetry Log Pages: Not Supported 00:31:36.230 Persistent Event Log Pages: Not Supported 00:31:36.230 Supported Log Pages Log Page: May Support 00:31:36.230 Commands Supported & Effects Log Page: Not Supported 00:31:36.230 Feature Identifiers & Effects Log Page:May Support 00:31:36.230 NVMe-MI Commands & Effects Log Page: May Support 00:31:36.230 Data Area 4 for Telemetry Log: Not Supported 00:31:36.230 Error Log Page Entries Supported: 128 00:31:36.230 Keep Alive: Supported 00:31:36.230 Keep Alive Granularity: 1000 ms 00:31:36.230 00:31:36.230 NVM Command Set Attributes 00:31:36.230 ========================== 00:31:36.230 Submission Queue Entry Size 00:31:36.230 Max: 64 00:31:36.230 Min: 64 00:31:36.230 Completion Queue Entry Size 00:31:36.230 Max: 16 00:31:36.230 Min: 16 00:31:36.230 Number of Namespaces: 1024 00:31:36.230 Compare Command: Not Supported 00:31:36.230 Write Uncorrectable Command: Not Supported 00:31:36.230 Dataset Management Command: Supported 00:31:36.230 Write Zeroes Command: Supported 00:31:36.230 Set Features Save Field: Not Supported 00:31:36.230 Reservations: Not Supported 00:31:36.230 Timestamp: Not Supported 00:31:36.230 Copy: Not Supported 00:31:36.230 Volatile Write Cache: Present 00:31:36.230 Atomic Write Unit (Normal): 1 00:31:36.230 Atomic Write Unit (PFail): 1 00:31:36.230 Atomic Compare & Write Unit: 1 00:31:36.230 Fused Compare & Write: Not Supported 00:31:36.230 Scatter-Gather List 00:31:36.230 SGL Command Set: Supported 00:31:36.230 SGL Keyed: Not Supported 00:31:36.230 SGL Bit Bucket Descriptor: Not Supported 00:31:36.230 SGL Metadata Pointer: Not Supported 00:31:36.230 Oversized SGL: Not Supported 00:31:36.230 SGL Metadata Address: Not Supported 00:31:36.230 SGL Offset: Supported 00:31:36.230 Transport SGL Data Block: Not Supported 00:31:36.230 Replay Protected Memory Block: Not Supported 00:31:36.230 00:31:36.230 Firmware Slot Information 00:31:36.230 ========================= 00:31:36.230 Active slot: 0 00:31:36.230 00:31:36.230 Asymmetric Namespace Access 00:31:36.230 =========================== 00:31:36.230 Change Count : 0 00:31:36.230 Number of ANA Group Descriptors : 1 00:31:36.230 ANA Group Descriptor : 0 00:31:36.230 ANA Group ID : 1 00:31:36.230 Number of NSID Values : 1 00:31:36.230 Change Count : 0 00:31:36.230 ANA State : 1 00:31:36.230 Namespace Identifier : 1 00:31:36.230 00:31:36.230 Commands Supported and Effects 00:31:36.230 ============================== 00:31:36.230 Admin Commands 00:31:36.230 -------------- 00:31:36.230 Get Log Page (02h): Supported 00:31:36.230 Identify (06h): Supported 00:31:36.230 Abort (08h): Supported 00:31:36.230 Set Features (09h): Supported 00:31:36.230 Get Features (0Ah): Supported 00:31:36.230 Asynchronous Event Request (0Ch): Supported 00:31:36.230 Keep Alive (18h): Supported 00:31:36.230 I/O Commands 00:31:36.230 ------------ 00:31:36.230 Flush (00h): Supported 00:31:36.230 Write (01h): Supported LBA-Change 00:31:36.230 Read (02h): Supported 00:31:36.230 Write Zeroes (08h): Supported LBA-Change 00:31:36.230 Dataset Management (09h): Supported 00:31:36.230 00:31:36.230 Error Log 00:31:36.230 ========= 00:31:36.230 Entry: 0 00:31:36.230 Error Count: 0x3 00:31:36.230 Submission Queue Id: 0x0 00:31:36.230 Command Id: 0x5 00:31:36.230 Phase Bit: 0 00:31:36.230 Status Code: 0x2 00:31:36.230 Status Code Type: 0x0 00:31:36.230 Do Not Retry: 1 00:31:36.230 
Error Location: 0x28 00:31:36.230 LBA: 0x0 00:31:36.230 Namespace: 0x0 00:31:36.230 Vendor Log Page: 0x0 00:31:36.230 ----------- 00:31:36.230 Entry: 1 00:31:36.230 Error Count: 0x2 00:31:36.230 Submission Queue Id: 0x0 00:31:36.230 Command Id: 0x5 00:31:36.230 Phase Bit: 0 00:31:36.230 Status Code: 0x2 00:31:36.230 Status Code Type: 0x0 00:31:36.230 Do Not Retry: 1 00:31:36.230 Error Location: 0x28 00:31:36.230 LBA: 0x0 00:31:36.230 Namespace: 0x0 00:31:36.230 Vendor Log Page: 0x0 00:31:36.230 ----------- 00:31:36.230 Entry: 2 00:31:36.231 Error Count: 0x1 00:31:36.231 Submission Queue Id: 0x0 00:31:36.231 Command Id: 0x4 00:31:36.231 Phase Bit: 0 00:31:36.231 Status Code: 0x2 00:31:36.231 Status Code Type: 0x0 00:31:36.231 Do Not Retry: 1 00:31:36.231 Error Location: 0x28 00:31:36.231 LBA: 0x0 00:31:36.231 Namespace: 0x0 00:31:36.231 Vendor Log Page: 0x0 00:31:36.231 00:31:36.231 Number of Queues 00:31:36.231 ================ 00:31:36.231 Number of I/O Submission Queues: 128 00:31:36.231 Number of I/O Completion Queues: 128 00:31:36.231 00:31:36.231 ZNS Specific Controller Data 00:31:36.231 ============================ 00:31:36.231 Zone Append Size Limit: 0 00:31:36.231 00:31:36.231 00:31:36.231 Active Namespaces 00:31:36.231 ================= 00:31:36.231 get_feature(0x05) failed 00:31:36.231 Namespace ID:1 00:31:36.231 Command Set Identifier: NVM (00h) 00:31:36.231 Deallocate: Supported 00:31:36.231 Deallocated/Unwritten Error: Not Supported 00:31:36.231 Deallocated Read Value: Unknown 00:31:36.231 Deallocate in Write Zeroes: Not Supported 00:31:36.231 Deallocated Guard Field: 0xFFFF 00:31:36.231 Flush: Supported 00:31:36.231 Reservation: Not Supported 00:31:36.231 Namespace Sharing Capabilities: Multiple Controllers 00:31:36.231 Size (in LBAs): 1953525168 (931GiB) 00:31:36.231 Capacity (in LBAs): 1953525168 (931GiB) 00:31:36.231 Utilization (in LBAs): 1953525168 (931GiB) 00:31:36.231 UUID: cc1216dd-0f58-4ef0-b0fd-43465583e2a3 00:31:36.231 Thin Provisioning: Not Supported 00:31:36.231 Per-NS Atomic Units: Yes 00:31:36.231 Atomic Boundary Size (Normal): 0 00:31:36.231 Atomic Boundary Size (PFail): 0 00:31:36.231 Atomic Boundary Offset: 0 00:31:36.231 NGUID/EUI64 Never Reused: No 00:31:36.231 ANA group ID: 1 00:31:36.231 Namespace Write Protected: No 00:31:36.231 Number of LBA Formats: 1 00:31:36.231 Current LBA Format: LBA Format #00 00:31:36.231 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:36.231 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.231 rmmod nvme_tcp 00:31:36.231 rmmod nvme_fabrics 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:36.231 15:27:23 
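The discovery and identify output above comes from a target that was built entirely through the kernel nvmet configfs tree earlier in this run (the mkdir/echo/ln -s sequence inside configure_kernel_target). A condensed sketch of that sequence with the NQN, device and address from this log; the destination attribute names are not echoed in the trace, so the paths below follow the stock Linux nvmet configfs layout and should be treated as assumptions:

  modprobe nvmet          # nvmet_tcp ends up loaded too (it is removed at cleanup); load it explicitly if in doubt
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

Once the symlink lands, the port starts listening and the nvme discover and spdk_nvme_identify commands shown above can reach both the discovery subsystem and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.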
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.231 15:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:38.771 15:27:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:40.151 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:40.151 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:40.151 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:40.151 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:40.151 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:40.151 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:31:40.151 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:40.151 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:40.151 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:40.151 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:40.411 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:40.411 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:40.411 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:40.411 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:40.411 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:40.411 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:41.355 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:31:41.355 00:31:41.355 real 0m12.014s 00:31:41.355 user 0m2.816s 00:31:41.355 sys 0m5.082s 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.355 ************************************ 00:31:41.355 END TEST nvmf_identify_kernel_target 00:31:41.355 ************************************ 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.355 ************************************ 00:31:41.355 START TEST nvmf_auth_host 00:31:41.355 ************************************ 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:41.355 * Looking for test storage... 
00:31:41.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lcov --version 00:31:41.355 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.617 --rc genhtml_branch_coverage=1 00:31:41.617 --rc genhtml_function_coverage=1 00:31:41.617 --rc genhtml_legend=1 00:31:41.617 --rc geninfo_all_blocks=1 00:31:41.617 --rc geninfo_unexecuted_blocks=1 00:31:41.617 00:31:41.617 ' 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.617 --rc genhtml_branch_coverage=1 00:31:41.617 --rc genhtml_function_coverage=1 00:31:41.617 --rc genhtml_legend=1 00:31:41.617 --rc geninfo_all_blocks=1 00:31:41.617 --rc geninfo_unexecuted_blocks=1 00:31:41.617 00:31:41.617 ' 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.617 --rc genhtml_branch_coverage=1 00:31:41.617 --rc genhtml_function_coverage=1 00:31:41.617 --rc genhtml_legend=1 00:31:41.617 --rc geninfo_all_blocks=1 00:31:41.617 --rc geninfo_unexecuted_blocks=1 00:31:41.617 00:31:41.617 ' 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.617 --rc genhtml_branch_coverage=1 00:31:41.617 --rc genhtml_function_coverage=1 00:31:41.617 --rc genhtml_legend=1 00:31:41.617 --rc geninfo_all_blocks=1 00:31:41.617 --rc geninfo_unexecuted_blocks=1 00:31:41.617 00:31:41.617 ' 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.617 15:27:28 
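The scripts/common.sh trace above is a per-component version compare: lt 1.15 2 splits both strings on '.', '-' or ':' and walks the components numerically, padding the shorter list with zeros. A compact equivalent; the function name ver_lt is illustrative, not the harness's:

  ver_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                            # equal is not "less than"
  }
  ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # true here, so the older --rc lcov_*_coverage spellings are exported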
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.617 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:41.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.618 15:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:44.915 15:27:31 
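auth.sh will exercise every digest in digests=(sha256 sha384 sha512) against every DH group in dhgroups=(ffdhe2048 .. ffdhe8192), pairing host NQN nqn.2024-02.io.spdk:host0 with subsystem nqn.2024-02.io.spdk:cnode0. The host identity it reuses was minted when nvmf/common.sh was sourced above: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is its UUID suffix. A minimal sketch of that derivation; the parameter expansion used to strip the prefix is an assumption, only the resulting values appear in the trace:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the UUID part
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s 4420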
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:44.915 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:44.915 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.915 
15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:44.915 Found net devices under 0000:84:00.0: cvl_0_0 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:44.915 Found net devices under 0000:84:00.1: cvl_0_1 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.915 15:27:31 
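NIC detection above boils down to globbing each E810 function's net/ directory in sysfs; the two ports found in this run, 0000:84:00.0 and 0000:84:00.1, map to cvl_0_0 and cvl_0_1. An equivalent sketch, with the device addresses taken from this log:

  for pci in 0000:84:00.0 0000:84:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $netdir ]] || continue                  # skip if the glob did not match
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done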
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:44.915 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:44.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:31:44.916 00:31:44.916 --- 10.0.0.2 ping statistics --- 00:31:44.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.916 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:31:44.916 00:31:44.916 --- 10.0.0.1 ping statistics --- 00:31:44.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.916 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3294668 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3294668 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3294668 ']' 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
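nvmfappstart above launches the SPDK target inside the target namespace with the nvme_auth debug flag and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that launch; the rpc_get_methods poll stands in for the real waitforlisten helper, which tracks the pid and does more bookkeeping:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # Poll the default RPC socket until the app is up.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done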
00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4769606807f80e866a9ecb04b8c51e34 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3sO 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4769606807f80e866a9ecb04b8c51e34 0 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4769606807f80e866a9ecb04b8c51e34 0 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4769606807f80e866a9ecb04b8c51e34 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3sO 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3sO 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3sO 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:44.916 15:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a1e37af1524a0bceb3dbfb1414e410900248fbf7e6d778a4eecea8a6e0dbefe 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sFf 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a1e37af1524a0bceb3dbfb1414e410900248fbf7e6d778a4eecea8a6e0dbefe 3 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a1e37af1524a0bceb3dbfb1414e410900248fbf7e6d778a4eecea8a6e0dbefe 3 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a1e37af1524a0bceb3dbfb1414e410900248fbf7e6d778a4eecea8a6e0dbefe 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:44.916 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sFf 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sFf 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.sFf 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b3658d8d896302808ed79529805196cca5d3ee776f67c5c 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2dM 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b3658d8d896302808ed79529805196cca5d3ee776f67c5c 0 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b3658d8d896302808ed79529805196cca5d3ee776f67c5c 0 
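The gen_dhchap_key helper traced above for keys[0]/ckeys[0], and repeated below for the remaining key slots, draws len/2 random bytes from /dev/urandom, wraps them in the DH-HMAC-CHAP secret syntax, and leaves the result in a mode-0600 temp file whose path the caller stores in keys[] or ckeys[]. The inline "python -" step that produces the DHHC-1 string is not shown in the trace, so the comment describing it in the sketch below is an assumption based only on the key strings that appear later in the log.

# shape of gen_dhchap_key <digest> <len>, reconstructed from the trace
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # digest name -> DHHC-1 digest index
digest=null; len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # e.g. 4769606807f80e866a9ecb04b8c51e34
file=$(mktemp -t "spdk.key-$digest.XXX")           # e.g. /tmp/spdk.key-null.3sO
# an inline "python -" script (not shown in the trace) encodes the hex key into the
# DHHC-1:<digest index>:<base64 payload>: form seen later in the log and writes it
# into "$file" -- assumption
chmod 0600 "$file"
echo "$file"                                       # callers store this path in keys[]/ckeys[]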
00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b3658d8d896302808ed79529805196cca5d3ee776f67c5c 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2dM 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2dM 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2dM 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a64e8a32d4cd74980ebf70d52d17ddd3541d4800cb0c0ae3 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DpU 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a64e8a32d4cd74980ebf70d52d17ddd3541d4800cb0c0ae3 2 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a64e8a32d4cd74980ebf70d52d17ddd3541d4800cb0c0ae3 2 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a64e8a32d4cd74980ebf70d52d17ddd3541d4800cb0c0ae3 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DpU 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DpU 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DpU 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:45.177 15:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3de2ea7e66a6448af9e9b99ac6e98474 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kvt 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3de2ea7e66a6448af9e9b99ac6e98474 1 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3de2ea7e66a6448af9e9b99ac6e98474 1 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3de2ea7e66a6448af9e9b99ac6e98474 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:45.177 15:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kvt 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kvt 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kvt 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a01da7139b6966ab4ecb8f875febc45f 00:31:45.177 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.85P 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a01da7139b6966ab4ecb8f875febc45f 1 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a01da7139b6966ab4ecb8f875febc45f 1 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a01da7139b6966ab4ecb8f875febc45f 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.85P 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.85P 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.85P 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f2a15ff3df433901edde5a833043a424155a8d1eff02e23 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.81O 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f2a15ff3df433901edde5a833043a424155a8d1eff02e23 2 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f2a15ff3df433901edde5a833043a424155a8d1eff02e23 2 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f2a15ff3df433901edde5a833043a424155a8d1eff02e23 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.81O 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.81O 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.81O 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:45.441 15:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=00be1d90c612e0ee08029e1f654fc7a7 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.l0u 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 00be1d90c612e0ee08029e1f654fc7a7 0 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 00be1d90c612e0ee08029e1f654fc7a7 0 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=00be1d90c612e0ee08029e1f654fc7a7 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.l0u 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.l0u 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.l0u 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=10bfee6e7abfcada60837d5e112583d470a53320763e4d8c62085c4d1906bb3b 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gR0 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 10bfee6e7abfcada60837d5e112583d470a53320763e4d8c62085c4d1906bb3b 3 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 10bfee6e7abfcada60837d5e112583d470a53320763e4d8c62085c4d1906bb3b 3 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=10bfee6e7abfcada60837d5e112583d470a53320763e4d8c62085c4d1906bb3b 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:45.441 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gR0 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gR0 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gR0 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3294668 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3294668 ']' 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:45.702 15:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3sO 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.sFf ]] 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sFf 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2dM 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DpU ]] 00:31:46.273 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.DpU 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kvt 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.85P ]] 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.85P 00:31:46.274 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.81O 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.l0u ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.l0u 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gR0 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.535 15:27:33 
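Once the five key/controller-key pairs exist on disk, the test registers each file with the running nvmf_tgt through its keyring, using the rpc_cmd wrapper seen in the trace (which forwards to the target's RPC socket inside the namespace). Condensed from the keyring_file_add_key calls above, with the file names produced by this run:

rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.3sO
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sFf
rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-null.2dM
rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DpU
rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha256.kvt
rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.85P
rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha384.81O
rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.l0u
rpc_cmd keyring_file_add_key key4  /tmp/spdk.key-sha512.gR0   # keys[4] has no controller key (ckeys[4] is empty)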
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:46.535 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:46.536 15:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:47.919 Waiting for block devices as requested 00:31:47.919 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:48.179 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:48.179 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:48.440 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:48.440 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:48.700 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:48.700 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:48.700 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:48.961 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:48.961 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:48.961 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:48.961 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:49.222 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:49.222 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:49.222 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:49.484 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:49.484 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:50.057 No valid GPT data, bailing 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:50.057 15:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:31:50.057 00:31:50.057 Discovery Log Number of Records 2, Generation counter 2 00:31:50.057 =====Discovery Log Entry 0====== 00:31:50.057 trtype: tcp 00:31:50.057 adrfam: ipv4 00:31:50.057 subtype: current discovery subsystem 00:31:50.057 treq: not specified, sq flow control disable supported 00:31:50.057 portid: 1 00:31:50.057 trsvcid: 4420 00:31:50.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:50.057 traddr: 10.0.0.1 00:31:50.057 eflags: none 00:31:50.057 sectype: none 00:31:50.057 =====Discovery Log Entry 1====== 00:31:50.057 trtype: tcp 00:31:50.057 adrfam: ipv4 00:31:50.057 subtype: nvme subsystem 00:31:50.057 treq: not specified, sq flow control disable supported 00:31:50.057 portid: 1 00:31:50.057 trsvcid: 4420 00:31:50.057 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:50.057 traddr: 10.0.0.1 00:31:50.057 eflags: none 00:31:50.057 sectype: none 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.057 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
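The target side of the handshake in this test is the kernel nvmet target, which the trace above builds through configfs: a subsystem nqn.2024-02.io.spdk:cnode0 backed by the local /dev/nvme0n1, a TCP port on 10.0.0.1:4420, and a host entry nqn.2024-02.io.spdk:host0 linked into the subsystem's allowed_hosts, after which nvme discover lists the discovery subsystem plus cnode0 as shown. The trace only records the values being echoed, not the attribute files they are redirected into, so the right-hand sides in the sketch below are the standard nvmet configfs attribute names and should be read as assumptions. The nvmet_auth_set_key calls that begin right after this point echo the negotiated hash ('hmac(sha256)'), the DH group and the DHHC-1 secrets into that host entry; the exact dhchap_* attribute names are likewise not visible in the trace.

# kernel nvmet bring-up, condensed from the trace; redirect targets are assumed standard attributes
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
kernel_subsystem=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
kernel_namespace=$kernel_subsystem/namespaces/1
kernel_port=$nvmet/ports/1
mkdir "$kernel_subsystem" "$kernel_namespace" "$kernel_port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$kernel_subsystem/attr_model"   # assumed attribute
echo 1            > "$kernel_subsystem/attr_allow_any_host"             # assumed attribute
echo /dev/nvme0n1 > "$kernel_namespace/device_path"                     # assumed attribute
echo 1            > "$kernel_namespace/enable"                          # assumed attribute
echo 10.0.0.1     > "$kernel_port/addr_traddr"                          # assumed attribute
echo tcp          > "$kernel_port/addr_trtype"                          # assumed attribute
echo 4420         > "$kernel_port/addr_trsvcid"                         # assumed attribute
echo ipv4         > "$kernel_port/addr_adrfam"                          # assumed attribute
ln -s "$kernel_subsystem" "$kernel_port/subsystems/"
# host entry used for DH-HMAC-CHAP; the bare "echo 0" in the trace presumably turns allow_any_host
# back off so that only the linked host may connect (assumption)
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$kernel_subsystem/attr_allow_any_host"                        # assumed attribute
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" \
      "$kernel_subsystem/allowed_hosts/nqn.2024-02.io.spdk:host0"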
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.058 15:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.327 nvme0n1 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:50.327 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
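With the target-side secrets in place, connect_authenticate performs the host side of the test: it sets the SPDK bdev/nvme DH-HMAC-CHAP options, attaches a controller over NVMe/TCP with the matching --dhchap-key (and, when a controller key exists, --dhchap-ctrlr-key), confirms that the controller came up, and detaches it again. The sketch below condenses the initial connect traced above, which offers key1/ckey1 with every digest and DH group; the loop that follows repeats the same pattern with bdev_nvme_set_options restricted to a single digest/dhgroup per key index.

rpc_cmd bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" when DH-HMAC-CHAP succeeds
rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next digest/dhgroup/key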
00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.328 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.665 nvme0n1 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.665 15:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.665 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.666 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.666 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.666 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.951 nvme0n1 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.951 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.952 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.213 nvme0n1 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.213 15:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.213 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.474 nvme0n1 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.474 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.735 nvme0n1 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.735 15:27:38 
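The host-side steps traced above repeat for every key index: restrict bdev_nvme to a single digest and DH group, then attach with the matching host/controller key pair. A minimal standalone sketch of that step, assuming SPDK's scripts/rpc.py is used in place of the suite's rpc_cmd wrapper and that key2/ckey2 were registered by the earlier test setup:

# Sketch of one host-side DH-CHAP attach; parameters taken from the trace
# (sha256 digest, ffdhe2048 group, key pair 2). The rpc path is an assumption.
rpc=./scripts/rpc.py

$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2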
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.735 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.997 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.258 nvme0n1 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.258 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.259 
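The get_main_ns_ip block that recurs before every attach resolves the target address by transport; the associative-array lookup and the indirect expansion that yields 10.0.0.1 are visible in the trace. A condensed sketch of that logic, where the transport variable name (TEST_TRANSPORT here) is an assumption and NVMF_INITIATOR_IP=10.0.0.1 matches this run:

# Sketch of the address selection traced in get_main_ns_ip (nvmf/common.sh).
NVMF_INITIATOR_IP=10.0.0.1      # value echoed by the trace
TEST_TRANSPORT=tcp              # assumed variable name; "tcp" in this run

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # stores the variable *name*
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}         # -> NVMF_INITIATOR_IP
    [[ -z $ip ]] && return 1
    [[ -z ${!ip} ]] && return 1                  # indirect expansion -> 10.0.0.1
    echo "${!ip}"
}

get_main_ns_ip    # prints 10.0.0.1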
15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.259 15:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.520 nvme0n1 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.520 15:27:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.520 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.781 nvme0n1 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.781 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.042 15:27:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.042 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.303 nvme0n1 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.303 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.304 15:27:39 
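For key index 4 the controller key is empty ([[ -z '' ]] above), so connect_authenticate must pass --dhchap-ctrlr-key only when a ckey exists; the ${ckeys[keyid]:+...} expansion in the trace does exactly that. A small sketch of the pattern, with placeholder strings standing in for the DHHC-1 secrets:

# Optional --dhchap-ctrlr-key argument, mirroring the expansion in the trace.
# ckeys[] entries are placeholders, not the real DHHC-1 secrets.
declare -a ckeys=([0]="c0" [1]="c1" [2]="c2" [3]="c3")   # no entry for index 4
keyid=4

ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

# With keyid=4 the array stays empty, so only --dhchap-key is passed on attach.
echo bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
# -> bdev_nvme_attach_controller --dhchap-key key4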
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.304 15:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.304 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.565 nvme0n1 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.565 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.138 nvme0n1 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:54.138 15:27:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.138 15:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.709 nvme0n1 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
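host/auth.sh@101 and @102 in the trace are the two loops driving everything above: each DH group is paired with each key index, and every pairing runs nvmet_auth_set_key on the target followed by connect_authenticate on the host. A compact sketch of that driver, assuming keys[]/ckeys[] already hold the DHHC-1 secrets generated earlier in the test:

# Driver loops sketched from auth.sh@101-104: every group x key index with
# the sha256 digest used in this run. Group list limited to those seen here.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # target-side key setup
        connect_authenticate sha256 "$dhgroup" "$keyid"   # attach, verify, detach
    done
done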
00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.709 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.969 nvme0n1 00:31:54.969 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.969 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.969 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.969 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.969 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.969 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.230 15:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.800 nvme0n1 00:31:55.800 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.800 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.800 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.801 15:27:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.801 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.372 nvme0n1 00:31:56.372 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.372 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.372 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.372 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.372 15:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.372 15:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.314 nvme0n1 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 
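[editor's note] Each pass in the trace above and below is the same four-step host-side exchange, repeated for a different digest/dhgroup/keyid combination: restrict the host's DH-HMAC-CHAP options, attach the controller with the host and controller keys, confirm nvme0 appeared, then detach before the next combination. A minimal standalone sketch of the iteration that starts right after this point (sha256 / ffdhe6144 / keyid 1) is shown here for orientation. It is an illustrative reconstruction, not the test's own code path: it assumes the secrets were already registered under the key names key1/ckey1 earlier in the test, that the target at 10.0.0.1:4420 exports nqn.2024-02.io.spdk:cnode0 with matching keys, and that scripts/rpc.py is the usual SPDK RPC client that rpc_cmd wraps. All RPC names and flags are taken from the trace itself.

  # One connect_authenticate iteration (sha256 / ffdhe6144 / keyid 1) -- sketch only.
  # Assumes key1/ckey1 and the target-side keys were set up earlier in the test.

  # Limit the host to the digest/dhgroup pair under test.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # Attach with bidirectional authentication (host key + controller key).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the controller authenticated and came up, then tear it down.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

[end editor's note]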
00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.314 15:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 nvme0n1 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.257 15:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.257 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.517 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.458 nvme0n1 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.458 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.459 15:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.459 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.401 nvme0n1 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.401 15:27:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.401 15:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.401 15:27:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.401 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.343 nvme0n1 00:32:01.343 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.343 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.343 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.343 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.343 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.343 15:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:01.343 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.344 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.344 15:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:03.254 nvme0n1 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:03.254 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.513 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.513 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.514 15:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 nvme0n1 00:32:05.422 15:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.422 15:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.422 15:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.422 15:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.422 15:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 15:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:05.422 
15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:05.422 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.423 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.423 15:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.333 nvme0n1 00:32:07.333 15:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.333 15:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.333 15:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.333 15:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.333 15:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.333 15:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.333 
15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.333 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.334 15:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.244 nvme0n1 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.245 15:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.163 nvme0n1 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.163 nvme0n1 00:32:11.163 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.164 15:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.424 nvme0n1 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:11.424 15:27:58 
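
The host-side half of each iteration above reduces to a short RPC sequence. The sketch below distills it for the sha384 / ffdhe2048 / keyid=1 case traced here; it is not the verbatim test script. rpc_cmd, the key names key1/ckey1, the NQNs, and the 10.0.0.1:4420 listener all come straight from the trace, while it is assumed that the DHHC-1 secrets behind those key names were registered earlier in the test setup.

  # Minimal host-side sketch of connect_authenticate() for one iteration
  # (digest=sha384, dhgroup=ffdhe2048, keyid=1), assuming key1/ckey1 already
  # name valid DH-HMAC-CHAP secrets loaded by the surrounding setup.
  digest=sha384 dhgroup=ffdhe2048 keyid=1

  # Limit the initiator to exactly the digest/DH-group pair under test.
  rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" \
      --dhchap-dhgroups "$dhgroup"

  # Attach to the authenticating subsystem; --dhchap-ctrlr-key enables
  # bidirectional authentication and is omitted for key indexes without a
  # controller secret (keyid=4 in this run).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

The controller-list check and detach that follow each attach are covered by a separate sketch further down.
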
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.424 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.738 nvme0n1 00:32:11.738 15:27:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.738 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.739 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.998 nvme0n1 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.998 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.257 nvme0n1 00:32:12.257 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.257 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.257 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.257 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.257 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.257 15:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.257 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.517 nvme0n1 00:32:12.517 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.517 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.517 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.517 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.517 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.517 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.775 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.775 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.775 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.775 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.775 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.775 
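
Each iteration starts on the target side with nvmet_auth_set_key, which in this trace only shows bare echo commands because set -x does not print redirections. A plausible reading, sketched below, is that those echoes are written into the Linux kernel nvmet configfs entry for the host; the configfs paths and attribute names are assumptions made for illustration and do not appear anywhere in this log.

  # Hedged sketch of what the echoed values in nvmet_auth_set_key appear to
  # configure on the kernel nvmet target. The paths below are assumed, not
  # taken from this trace.
  hostnqn=nqn.2024-02.io.spdk:host0
  nvmet_host=/sys/kernel/config/nvmet/hosts/$hostnqn

  digest='hmac(sha384)'   # matches the echo 'hmac(sha384)' lines above
  dhgroup=ffdhe3072       # matches the echo ffdhe3072 lines above
  key='DHHC-1:02:...'     # host secret for this keyid (elided here)
  ckey='DHHC-1:00:...'    # controller secret, empty for keyid=4

  echo "$digest"  > "$nvmet_host/dhchap_hash"
  echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"
  echo "$key"     > "$nvmet_host/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
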
15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.775 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:12.776 15:27:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.776 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.034 nvme0n1 00:32:13.034 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.034 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.034 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.035 15:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.294 nvme0n1 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.294 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.555 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.815 nvme0n1 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.815 
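
The get_main_ns_ip expansion that repeats before every attach is just a transport-to-environment-variable lookup. The sketch below reconstructs it from the expansions traced above (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 in this run); the original helper in nvmf/common.sh may differ in detail.

  # Reconstruction of get_main_ns_ip from the expansions visible above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )

      # Bail out if the transport is unset or has no candidate variable.
      [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1

      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      [[ -z ${!ip:-} ]] && return 1          # indirect expansion of that name
      echo "${!ip}"                          # prints 10.0.0.1 in this log
  }
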
15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.815 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.076 nvme0n1 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.076 
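
After every attach the test reads the controller list back and tears the controller down again, which is what the bdev_nvme_get_controllers / jq / detach triplets above are doing. One detail worth noting: the backslashes in [[ nvme0 == \n\v\m\e\0 ]] are purely a set -x artifact; because the right-hand side of == inside [[ ]] was quoted in the script, xtrace escapes each character to show it is compared literally rather than treated as a glob pattern. A compact sketch of the check:

  # Verify that authentication succeeded (the expected controller exists),
  # then detach before the next digest/dhgroup/keyid combination.
  ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $ctrlr == "nvme0" ]]                    # shown as \n\v\m\e\0 under set -x
  rpc_cmd bdev_nvme_detach_controller nvme0
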
15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:14.076 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:14.077 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:14.337 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.337 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.337 15:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.597 nvme0n1 00:32:14.597 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.597 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.597 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.597 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.597 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.597 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.857 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:14.858 15:28:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.858 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.117 nvme0n1 00:32:15.117 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.117 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.117 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.117 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.117 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.117 15:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.378 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.638 nvme0n1 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:15.638 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.639 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.210 nvme0n1 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.210 15:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.210 15:28:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.210 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.782 nvme0n1 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.782 15:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.723 nvme0n1 00:32:17.723 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.723 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.723 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.723 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.723 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.723 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.984 15:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.947 nvme0n1 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.947 15:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.947 15:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.947 15:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.335 nvme0n1 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.335 15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.335 
15:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.276 nvme0n1 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.276 15:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.217 nvme0n1 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.217 15:28:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.217 15:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.131 nvme0n1 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.131 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.132 15:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.044 nvme0n1 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.044 
15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.044 15:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.958 nvme0n1 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.958 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.959 15:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.872 nvme0n1 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.872 15:28:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.872 15:28:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.872 15:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.783 nvme0n1 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:31.783 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.784 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:32.044 nvme0n1 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.044 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.045 15:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.305 nvme0n1 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:32.305 
15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.305 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.566 nvme0n1 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.566 
15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.566 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.827 nvme0n1 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:32.827 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:32.828 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:32.828 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.828 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.828 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.088 nvme0n1 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.088 15:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.348 nvme0n1 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.348 
15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.348 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.609 15:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.609 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.870 nvme0n1 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:33.870 15:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.870 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.132 nvme0n1 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.132 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.133 15:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.133 15:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.394 nvme0n1 00:32:34.394 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.394 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.394 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.394 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.394 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.394 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.655 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.655 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.655 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.655 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.655 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.656 
15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.656 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
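For reference, the pattern exercised in the trace above and below is: for each DH group (ffdhe3072, ffdhe4096, ffdhe6144, ...) and key index, host/auth.sh installs the key on the target side (nvmet_auth_set_key), restricts the host to the digest/dhgroup under test with bdev_nvme_set_options, attaches a controller with --dhchap-key / --dhchap-ctrlr-key, confirms that nvme0 appears in bdev_nvme_get_controllers, and detaches it before the next round. The sketch below reproduces only the host-side half of that loop, under a few assumptions that are not visible in this excerpt: rpc_cmd is a thin wrapper around scripts/rpc.py of the running SPDK application, the target subsystem nqn.2024-02.io.spdk:cnode0 is already listening on 10.0.0.1:4420 with matching keys configured, and keyring entries named key0..key4 (plus ckey0..ckey3) were registered earlier in the script.

#!/usr/bin/env bash
# Hedged sketch of the host-side connect/verify/detach loop seen in this trace.
# Not the actual host/auth.sh; paths and key registration are assumed to have
# been set up earlier in the test script.
set -e

# Illustrative wrapper; host/auth.sh uses its own rpc_cmd helper.
rpc_cmd() { "${SPDK_DIR:-.}/scripts/rpc.py" "$@"; }

hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
digest=sha512

# Which key indexes have a controller (bidirectional) key. Index 4 has none,
# matching the trace, where keyid=4 is attached without --dhchap-ctrlr-key.
have_ckey=(1 1 1 1 0)

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in 0 1 2 3 4; do
        # Host side: accept only the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        ctrlr_key=()
        if [[ ${have_ckey[$keyid]} -eq 1 ]]; then
            ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
        fi

        # Attach: only succeeds if DH-HMAC-CHAP authentication completes.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

        # Verify the controller came up, then tear it down for the next round.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The check against nvme0 and the detach mirror the "[[ nvme0 == \n\v\m\e\0 ]]" and bdev_nvme_detach_controller entries repeated throughout the trace; the empty ckey handling reflects the "[[ -z '' ]]" branch taken for key index 4.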
00:32:34.918 nvme0n1 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.918 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.919 15:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.919 15:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.492 nvme0n1 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.492 15:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:35.492 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.493 15:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.493 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.066 nvme0n1 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.066 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.067 15:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.328 nvme0n1 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.328 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.589 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.851 nvme0n1 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.851 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.112 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.113 15:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.375 nvme0n1 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.375 15:28:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.375 15:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.786 nvme0n1 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.786 15:28:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.786 15:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.727 nvme0n1 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.727 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.728 15:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.667 nvme0n1 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.667 15:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.609 nvme0n1 00:32:41.609 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.609 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.609 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.609 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.609 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.609 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.609 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:41.610 15:28:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.610 15:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.552 nvme0n1 00:32:42.552 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.552 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.552 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.552 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.552 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.552 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDc2OTYwNjgwN2Y4MGU4NjZhOWVjYjA0YjhjNTFlMzRpJt9J: 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: ]] 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGExZTM3YWYxNTI0YTBiY2ViM2RiZmIxNDE0ZTQxMDkwMDI0OGZiZjdlNmQ3NzhhNGVlY2VhOGE2ZTBkYmVmZbDktxM=: 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:42.813 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.814 15:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.727 nvme0n1 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.727 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.728 15:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.641 nvme0n1 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.641 15:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.641 15:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.641 15:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.588 nvme0n1 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YyYTE1ZmYzZGY0MzM5MDFlZGRlNWE4MzMwNDNhNDI0MTU1YThkMWVmZjAyZTIzuE8o7g==: 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDBiZTFkOTBjNjEyZTBlZTA4MDI5ZTFmNjU0ZmM3YTeVRvb5: 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.588 15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.588 
15:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.499 nvme0n1 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTBiZmVlNmU3YWJmY2FkYTYwODM3ZDVlMTEyNTgzZDQ3MGE1MzMyMDc2M2U0ZDhjNjIwODVjNGQxOTA2YmIzYv0+8qA=: 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:50.499 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.500 15:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.409 nvme0n1 00:32:52.409 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.409 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.409 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.409 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.409 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.409 15:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.410 request: 00:32:52.410 { 00:32:52.410 "name": "nvme0", 00:32:52.410 "trtype": "tcp", 00:32:52.410 "traddr": "10.0.0.1", 00:32:52.410 "adrfam": "ipv4", 00:32:52.410 "trsvcid": "4420", 00:32:52.410 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:52.410 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:52.410 "prchk_reftag": false, 00:32:52.410 "prchk_guard": false, 00:32:52.410 "hdgst": false, 00:32:52.410 "ddgst": false, 00:32:52.410 "allow_unrecognized_csi": false, 00:32:52.410 "method": "bdev_nvme_attach_controller", 00:32:52.410 "req_id": 1 00:32:52.410 } 00:32:52.410 Got JSON-RPC error response 00:32:52.410 response: 00:32:52.410 { 00:32:52.410 "code": -5, 00:32:52.410 "message": "Input/output error" 00:32:52.410 } 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
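The rejected attach above is the intended negative case: with DH-CHAP enforced on the target subsystem, bdev_nvme_attach_controller fails with JSON-RPC error -5 (Input/output error) when no --dhchap-key is supplied. A minimal standalone sketch of that check, assuming SPDK's scripts/rpc.py client and the address/NQNs shown in the trace (rpc_cmd in the log is the test suite's own RPC wrapper, used here only as a stand-in):
    # Attach without DH-CHAP keys against a subsystem that requires authentication;
    # the RPC is expected to fail, as the -5 response in the trace shows.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: attach succeeded without DH-CHAP keys" >&2
        exit 1
    fi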
00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.410 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.671 request: 00:32:52.671 { 00:32:52.671 "name": "nvme0", 00:32:52.671 "trtype": "tcp", 00:32:52.671 "traddr": "10.0.0.1", 00:32:52.671 "adrfam": "ipv4", 00:32:52.671 "trsvcid": "4420", 00:32:52.671 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:52.671 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:52.671 "prchk_reftag": false, 00:32:52.671 "prchk_guard": false, 00:32:52.671 "hdgst": false, 00:32:52.671 "ddgst": false, 00:32:52.671 "dhchap_key": "key2", 00:32:52.671 "allow_unrecognized_csi": false, 00:32:52.671 "method": "bdev_nvme_attach_controller", 00:32:52.671 "req_id": 1 00:32:52.671 } 00:32:52.671 Got JSON-RPC error response 00:32:52.671 response: 00:32:52.671 { 00:32:52.671 "code": -5, 00:32:52.671 "message": "Input/output error" 00:32:52.671 } 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
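After each rejected attach the suite also confirms that no controller was left registered: the (( 0 == 0 )) check above compares the jq length of bdev_nvme_get_controllers against zero. A rough equivalent, again assuming scripts/rpc.py as a stand-in for the suite's rpc_cmd helper:
    # The failed attach must not leave a bdev_nvme controller behind.
    count=$(scripts/rpc.py bdev_nvme_get_controllers | jq length)
    if (( count != 0 )); then
        echo "stale controller after failed DH-CHAP attach" >&2
        exit 1
    fi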
00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.671 request: 00:32:52.671 { 00:32:52.671 "name": "nvme0", 00:32:52.671 "trtype": "tcp", 00:32:52.671 "traddr": "10.0.0.1", 00:32:52.671 "adrfam": "ipv4", 00:32:52.671 "trsvcid": "4420", 00:32:52.671 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:52.671 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:52.671 "prchk_reftag": false, 00:32:52.671 "prchk_guard": false, 00:32:52.671 "hdgst": false, 00:32:52.671 "ddgst": false, 00:32:52.671 "dhchap_key": "key1", 00:32:52.671 "dhchap_ctrlr_key": "ckey2", 00:32:52.671 "allow_unrecognized_csi": false, 00:32:52.671 "method": "bdev_nvme_attach_controller", 00:32:52.671 "req_id": 1 00:32:52.671 } 00:32:52.671 Got JSON-RPC error response 00:32:52.671 response: 00:32:52.671 { 00:32:52.671 "code": -5, 00:32:52.671 "message": "Input/output 
error" 00:32:52.671 } 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.671 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:52.672 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:52.672 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:52.672 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:52.672 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.672 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.932 nvme0n1 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.932 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.193 request: 00:32:53.193 { 00:32:53.193 "name": "nvme0", 00:32:53.193 "dhchap_key": "key1", 00:32:53.193 "dhchap_ctrlr_key": "ckey2", 00:32:53.193 "method": "bdev_nvme_set_keys", 00:32:53.193 "req_id": 1 00:32:53.193 } 00:32:53.193 Got JSON-RPC error response 00:32:53.193 response: 00:32:53.193 { 00:32:53.193 "code": -13, 00:32:53.193 "message": "Permission denied" 00:32:53.193 } 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:53.193 15:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:54.133 15:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.133 15:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:54.133 15:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.133 15:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.133 15:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.133 15:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:54.133 15:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:55.513 15:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.513 15:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:55.513 15:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.513 15:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.513 15:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.513 15:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:55.513 15:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:55.513 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.513 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.513 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.513 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.513 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:55.513 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:55.513 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2IzNjU4ZDhkODk2MzAyODA4ZWQ3OTUyOTgwNTE5NmNjYTVkM2VlNzc2ZjY3YzVjTAUtgA==: 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YTY0ZThhMzJkNGNkNzQ5ODBlYmY3MGQ1MmQxN2RkZDM1NDFkNDgwMGNiMGMwYWUzLfrZrQ==: 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.514 nvme0n1 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMmVhN2U2NmE2NDQ4YWY5ZTliOTlhYzZlOTg0NzRgIDBs: 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTAxZGE3MTM5YjY5NjZhYjRlY2I4Zjg3NWZlYmM0NWZyAT3W: 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.514 request: 00:32:55.514 { 00:32:55.514 "name": "nvme0", 00:32:55.514 "dhchap_key": "key2", 00:32:55.514 "dhchap_ctrlr_key": "ckey1", 00:32:55.514 "method": "bdev_nvme_set_keys", 00:32:55.514 "req_id": 1 00:32:55.514 } 00:32:55.514 Got JSON-RPC error response 00:32:55.514 response: 00:32:55.514 { 00:32:55.514 "code": -13, 00:32:55.514 "message": "Permission denied" 00:32:55.514 } 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:55.514 15:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:56.453 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.453 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:56.453 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.453 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:56.713 15:28:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:56.713 rmmod nvme_tcp 00:32:56.713 rmmod nvme_fabrics 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3294668 ']' 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3294668 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3294668 ']' 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3294668 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3294668 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3294668' 00:32:56.713 killing process with pid 3294668 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3294668 00:32:56.713 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3294668 00:32:56.972 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:56.972 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:56.972 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:56.972 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:56.972 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:32:56.972 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:56.972 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.232 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.232 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.232 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.232 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:57.232 15:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:59.135 15:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:01.043 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:01.043 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:01.043 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:01.043 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:01.043 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:01.043 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:01.043 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:01.043 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:01.043 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:01.981 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:33:01.981 15:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3sO /tmp/spdk.key-null.2dM /tmp/spdk.key-sha256.kvt /tmp/spdk.key-sha384.81O /tmp/spdk.key-sha512.gR0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:01.981 15:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:03.354 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:03.354 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:03.354 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
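The cleanup trace above tears down the kernel nvmet target that backed the auth test by walking the configfs tree from the leaves up. Condensed into a standalone sketch, with the NQNs and paths taken from the trace; the redirect target of the bare 'echo 0' is not visible in the log and is an assumption (namespace enable attribute):

  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # unlink the allowed host and remove the host definition
  rm "$SUBSYS/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # disable the namespace before removing it (assumed target of the bare 'echo 0' in the log)
  echo 0 > "$SUBSYS/namespaces/1/enable"
  # drop the port->subsystem link, then remove children before parents
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$SUBSYS/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$SUBSYS"
  modprobe -r nvmet_tcp nvmet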
00:33:03.354 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:03.354 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:03.354 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:03.354 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:03.354 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:03.354 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:03.613 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:03.613 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:03.613 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:03.613 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:03.613 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:03.613 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:03.613 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:03.613 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:03.613 00:33:03.613 real 1m22.314s 00:33:03.613 user 1m20.865s 00:33:03.613 sys 0m8.663s 00:33:03.613 15:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:03.613 15:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.613 ************************************ 00:33:03.613 END TEST nvmf_auth_host 00:33:03.613 ************************************ 00:33:03.613 15:28:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:33:03.613 15:28:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:03.613 15:28:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:03.613 15:28:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:03.613 15:28:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.873 ************************************ 00:33:03.873 START TEST nvmf_digest 00:33:03.873 ************************************ 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:03.873 * Looking for test storage... 
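Before the digest suite output starts, it is worth spelling out what the auth run above was asserting: bdev_nvme_set_keys is accepted when the new DH-HMAC-CHAP key pair matches what the kernel target is configured with, and a mismatched controller key is rejected with JSON-RPC error -13 (Permission denied). A minimal sketch of that check, using the same rpc.py subcommand and flags as the trace (key names key1/key2/ckey2 are the keys registered earlier in the test):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # matching pair: accepted
  $RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # mismatched controller key: expected to fail with -13 "Permission denied"
  if $RPC bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "unexpected success" >&2
      exit 1
  fi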
00:33:03.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # lcov --version 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:33:03.873 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:03.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.874 --rc genhtml_branch_coverage=1 00:33:03.874 --rc genhtml_function_coverage=1 00:33:03.874 --rc genhtml_legend=1 00:33:03.874 --rc geninfo_all_blocks=1 00:33:03.874 --rc geninfo_unexecuted_blocks=1 00:33:03.874 00:33:03.874 ' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:03.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.874 --rc genhtml_branch_coverage=1 00:33:03.874 --rc genhtml_function_coverage=1 00:33:03.874 --rc genhtml_legend=1 00:33:03.874 --rc geninfo_all_blocks=1 00:33:03.874 --rc geninfo_unexecuted_blocks=1 00:33:03.874 00:33:03.874 ' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:03.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.874 --rc genhtml_branch_coverage=1 00:33:03.874 --rc genhtml_function_coverage=1 00:33:03.874 --rc genhtml_legend=1 00:33:03.874 --rc geninfo_all_blocks=1 00:33:03.874 --rc geninfo_unexecuted_blocks=1 00:33:03.874 00:33:03.874 ' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:03.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.874 --rc genhtml_branch_coverage=1 00:33:03.874 --rc genhtml_function_coverage=1 00:33:03.874 --rc genhtml_legend=1 00:33:03.874 --rc geninfo_all_blocks=1 00:33:03.874 --rc geninfo_unexecuted_blocks=1 00:33:03.874 00:33:03.874 ' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.874 
15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:03.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:03.874 15:28:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:33:03.874 15:28:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.177 
15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:07.177 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:07.177 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.177 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:07.178 Found net devices under 0000:84:00.0: cvl_0_0 
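The device-discovery trace above boils down to: match the E810 ports (8086:159b, bound to the ice driver) against the known device-ID tables, then read each port's interface name straight out of sysfs. Roughly, with the PCI addresses from this run:

  for pci in 0000:84:00.0 0000:84:00.1; do
      # each matched NIC exposes its netdev name under its PCI device node
      ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 and cvl_0_1 in this run
  done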
00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:07.178 Found net devices under 0000:84:00.1: cvl_0_1 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:33:07.178 00:33:07.178 --- 10.0.0.2 ping statistics --- 00:33:07.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.178 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:07.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:33:07.178 00:33:07.178 --- 10.0.0.1 ping statistics --- 00:33:07.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.178 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:07.178 ************************************ 00:33:07.178 START TEST nvmf_digest_clean 00:33:07.178 ************************************ 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3307702 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3307702 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3307702 ']' 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.178 15:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.178 [2024-10-28 15:28:53.600181] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:33:07.178 [2024-10-28 15:28:53.600276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.178 [2024-10-28 15:28:53.725588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.178 [2024-10-28 15:28:53.835822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.178 [2024-10-28 15:28:53.835933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.178 [2024-10-28 15:28:53.835977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.178 [2024-10-28 15:28:53.836007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.178 [2024-10-28 15:28:53.836033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
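The nvmf_tcp_init steps above put the target-side port into its own network namespace so the initiator reaches it over a real TCP path. The essential plumbing, with interface names and addresses exactly as they appear in the trace (the iptables comment argument is omitted here for brevity):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port, gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target application is then launched inside the namespace, as logged above:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &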
00:33:07.178 [2024-10-28 15:28:53.837358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.438 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.698 null0 00:33:07.698 [2024-10-28 15:28:54.446404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.698 [2024-10-28 15:28:54.470739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3307735 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3307735 /var/tmp/bperf.sock 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3307735 ']' 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:07.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.698 15:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.957 [2024-10-28 15:28:54.571636] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:33:07.957 [2024-10-28 15:28:54.571777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3307735 ] 00:33:07.957 [2024-10-28 15:28:54.722991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.216 [2024-10-28 15:28:54.831963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.154 15:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:09.154 15:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:09.154 15:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:09.154 15:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:09.154 15:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:09.722 15:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.722 15:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:10.290 nvme0n1 00:33:10.290 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:10.290 15:28:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:10.551 Running I/O for 2 seconds... 
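While that 2-second randread pass runs, the sequence that got it here is worth condensing, since every step above drives bdevperf over its RPC socket rather than a config file. Paths and arguments as logged:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1. start bdevperf idle: -z waits for perform_tests, --wait-for-rpc defers framework init
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. finish subsystem init (this clean run sets no DSA accel options first)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 3. attach the NVMe/TCP controller with data digest enabled (--ddgst)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. kick off the workload defined on the command line in step 1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests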
00:33:12.420 12818.00 IOPS, 50.07 MiB/s [2024-10-28T14:28:59.546Z] 11394.50 IOPS, 44.51 MiB/s 00:33:12.679 Latency(us) 00:33:12.679 [2024-10-28T14:28:59.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.679 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:12.679 nvme0n1 : 2.01 11387.88 44.48 0.00 0.00 11216.45 4587.52 35146.71 00:33:12.679 [2024-10-28T14:28:59.546Z] =================================================================================================================== 00:33:12.679 [2024-10-28T14:28:59.546Z] Total : 11387.88 44.48 0.00 0.00 11216.45 4587.52 35146.71 00:33:12.679 { 00:33:12.679 "results": [ 00:33:12.679 { 00:33:12.679 "job": "nvme0n1", 00:33:12.679 "core_mask": "0x2", 00:33:12.679 "workload": "randread", 00:33:12.679 "status": "finished", 00:33:12.679 "queue_depth": 128, 00:33:12.679 "io_size": 4096, 00:33:12.679 "runtime": 2.012402, 00:33:12.679 "iops": 11387.88373297184, 00:33:12.679 "mibps": 44.48392083192125, 00:33:12.679 "io_failed": 0, 00:33:12.679 "io_timeout": 0, 00:33:12.679 "avg_latency_us": 11216.452205268934, 00:33:12.679 "min_latency_us": 4587.52, 00:33:12.679 "max_latency_us": 35146.71407407407 00:33:12.679 } 00:33:12.679 ], 00:33:12.679 "core_count": 1 00:33:12.679 } 00:33:12.679 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:12.679 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:12.679 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:12.679 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:12.679 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:12.679 | select(.opcode=="crc32c") 00:33:12.679 | "\(.module_name) \(.executed)"' 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3307735 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3307735 ']' 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3307735 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3307735 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3307735' 00:33:12.941 killing process with pid 3307735 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3307735 00:33:12.941 Received shutdown signal, test time was about 2.000000 seconds 00:33:12.941 00:33:12.941 Latency(us) 00:33:12.941 [2024-10-28T14:28:59.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.941 [2024-10-28T14:28:59.808Z] =================================================================================================================== 00:33:12.941 [2024-10-28T14:28:59.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:12.941 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3307735 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3308405 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3308405 /var/tmp/bperf.sock 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3308405 ']' 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:13.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.198 15:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:13.198 [2024-10-28 15:29:00.054869] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:33:13.198 [2024-10-28 15:29:00.054990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308405 ] 00:33:13.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:13.198 Zero copy mechanism will not be used. 00:33:13.455 [2024-10-28 15:29:00.149285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.455 [2024-10-28 15:29:00.226863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.712 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:13.712 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:13.712 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:13.712 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:13.712 15:29:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:14.290 15:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.290 15:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.223 nvme0n1 00:33:15.223 15:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:15.223 15:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:15.481 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:15.481 Zero copy mechanism will not be used. 00:33:15.481 Running I/O for 2 seconds... 
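A condensed sketch of the per-run driver sequence traced above, in Python for illustration only: bdevperf is started with --wait-for-rpc on /var/tmp/bperf.sock, the framework is initialized over that socket, the controller is attached with data digest (--ddgst) enabled, and the workload is kicked off with perform_tests. Paths and arguments are copied verbatim from the trace; this is not SPDK source, just the same RPC calls issued from Python instead of host/digest.sh.

    import subprocess

    SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
    SOCK = "/var/tmp/bperf.sock"

    def rpc(*args: str) -> None:
        # Equivalent of the bperf_rpc wrapper used in host/digest.sh
        subprocess.run([SPDK + "/scripts/rpc.py", "-s", SOCK, *args], check=True)

    # Assumes bdevperf is already running with --wait-for-rpc, as in the log.
    rpc("framework_start_init")
    rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
        "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    subprocess.run([SPDK + "/examples/bdev/bdevperf/bdevperf.py",
                    "-s", SOCK, "perform_tests"], check=True)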
00:33:17.443 3896.00 IOPS, 487.00 MiB/s [2024-10-28T14:29:04.310Z] 3582.00 IOPS, 447.75 MiB/s 00:33:17.443 Latency(us) 00:33:17.443 [2024-10-28T14:29:04.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.443 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:17.443 nvme0n1 : 2.01 3580.29 447.54 0.00 0.00 4461.68 964.84 10874.12 00:33:17.443 [2024-10-28T14:29:04.310Z] =================================================================================================================== 00:33:17.443 [2024-10-28T14:29:04.310Z] Total : 3580.29 447.54 0.00 0.00 4461.68 964.84 10874.12 00:33:17.443 { 00:33:17.443 "results": [ 00:33:17.443 { 00:33:17.443 "job": "nvme0n1", 00:33:17.443 "core_mask": "0x2", 00:33:17.443 "workload": "randread", 00:33:17.443 "status": "finished", 00:33:17.443 "queue_depth": 16, 00:33:17.443 "io_size": 131072, 00:33:17.443 "runtime": 2.005425, 00:33:17.443 "iops": 3580.28846753182, 00:33:17.443 "mibps": 447.5360584414775, 00:33:17.443 "io_failed": 0, 00:33:17.443 "io_timeout": 0, 00:33:17.443 "avg_latency_us": 4461.6849470752095, 00:33:17.443 "min_latency_us": 964.8355555555555, 00:33:17.443 "max_latency_us": 10874.121481481481 00:33:17.443 } 00:33:17.443 ], 00:33:17.443 "core_count": 1 00:33:17.443 } 00:33:17.443 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:17.443 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:17.443 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:17.443 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:17.443 | select(.opcode=="crc32c") 00:33:17.443 | "\(.module_name) \(.executed)"' 00:33:17.443 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3308405 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3308405 ']' 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3308405 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3308405 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3308405' 00:33:18.012 killing process with pid 3308405 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3308405 00:33:18.012 Received shutdown signal, test time was about 2.000000 seconds 00:33:18.012 00:33:18.012 Latency(us) 00:33:18.012 [2024-10-28T14:29:04.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.012 [2024-10-28T14:29:04.879Z] =================================================================================================================== 00:33:18.012 [2024-10-28T14:29:04.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.012 15:29:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3308405 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3308959 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3308959 /var/tmp/bperf.sock 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3308959 ']' 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:18.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.271 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.271 [2024-10-28 15:29:05.129456] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:33:18.271 [2024-10-28 15:29:05.129580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308959 ] 00:33:18.530 [2024-10-28 15:29:05.218732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.530 [2024-10-28 15:29:05.295377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.788 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.788 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:18.788 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:18.788 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:18.788 15:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:19.360 15:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.360 15:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.299 nvme0n1 00:33:20.299 15:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:20.299 15:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:20.299 Running I/O for 2 seconds... 
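Each run above finishes with the same accel-stats check: accel_get_stats is fetched over the bperf socket and the jq program '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' reduces it to the pair that host/digest.sh reads into acc_module and acc_executed, before asserting acc_executed > 0 and that the expected module ("software" here, since scan_dsa=false) handled the digests. A minimal Python equivalent of that selection, over a made-up example payload rather than data captured from this run:

    stats = {
        "operations": [
            {"opcode": "copy",   "module_name": "software", "executed": 12},     # hypothetical values
            {"opcode": "crc32c", "module_name": "software", "executed": 21902},  # hypothetical values
        ]
    }

    for op in stats["operations"]:
        if op["opcode"] == "crc32c":
            # host/digest.sh reads this pair into acc_module / acc_executed
            print(op["module_name"], op["executed"])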
00:33:22.613 16042.00 IOPS, 62.66 MiB/s [2024-10-28T14:29:09.480Z] 12714.00 IOPS, 49.66 MiB/s 00:33:22.613 Latency(us) 00:33:22.613 [2024-10-28T14:29:09.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.613 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.613 nvme0n1 : 2.02 12644.24 49.39 0.00 0.00 10098.62 3689.43 21554.06 00:33:22.613 [2024-10-28T14:29:09.480Z] =================================================================================================================== 00:33:22.613 [2024-10-28T14:29:09.480Z] Total : 12644.24 49.39 0.00 0.00 10098.62 3689.43 21554.06 00:33:22.613 { 00:33:22.613 "results": [ 00:33:22.613 { 00:33:22.613 "job": "nvme0n1", 00:33:22.613 "core_mask": "0x2", 00:33:22.613 "workload": "randwrite", 00:33:22.613 "status": "finished", 00:33:22.613 "queue_depth": 128, 00:33:22.613 "io_size": 4096, 00:33:22.613 "runtime": 2.021158, 00:33:22.613 "iops": 12644.236620788677, 00:33:22.613 "mibps": 49.39154929995577, 00:33:22.613 "io_failed": 0, 00:33:22.613 "io_timeout": 0, 00:33:22.613 "avg_latency_us": 10098.623260580976, 00:33:22.613 "min_latency_us": 3689.434074074074, 00:33:22.613 "max_latency_us": 21554.062222222223 00:33:22.613 } 00:33:22.613 ], 00:33:22.613 "core_count": 1 00:33:22.613 } 00:33:22.613 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:22.613 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:22.613 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:22.613 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:22.613 | select(.opcode=="crc32c") 00:33:22.613 | "\(.module_name) \(.executed)"' 00:33:22.613 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3308959 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3308959 ']' 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3308959 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3308959 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3308959' 00:33:22.871 killing process with pid 3308959 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3308959 00:33:22.871 Received shutdown signal, test time was about 2.000000 seconds 00:33:22.871 00:33:22.871 Latency(us) 00:33:22.871 [2024-10-28T14:29:09.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.871 [2024-10-28T14:29:09.738Z] =================================================================================================================== 00:33:22.871 [2024-10-28T14:29:09.738Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.871 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3308959 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3309594 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3309594 /var/tmp/bperf.sock 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3309594 ']' 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:23.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:23.130 15:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:23.388 [2024-10-28 15:29:10.043449] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:33:23.388 [2024-10-28 15:29:10.043552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309594 ] 00:33:23.388 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.388 Zero copy mechanism will not be used. 00:33:23.388 [2024-10-28 15:29:10.150889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.388 [2024-10-28 15:29:10.232211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.648 15:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:23.648 15:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:23.648 15:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:23.648 15:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:23.648 15:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:24.589 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.589 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.159 nvme0n1 00:33:25.159 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:25.159 15:29:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:25.420 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:25.420 Zero copy mechanism will not be used. 00:33:25.420 Running I/O for 2 seconds... 
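The MiB/s figures in the result blocks above follow directly from iops times io_size; a quick check against the JSON fields bdevperf prints (values copied from the runs above):

    def mibps(iops: float, io_size: int) -> float:
        # bdevperf reports throughput as IOPS times the I/O size, scaled to MiB/s
        return iops * io_size / (1024 * 1024)

    print(round(mibps(11387.88373297184, 4096), 2))    # 44.48  -> randread, 4 KiB, qd 128
    print(round(mibps(3580.28846753182, 131072), 2))   # 447.54 -> randread, 128 KiB, qd 16
    print(round(mibps(12644.236620788677, 4096), 2))   # 49.39  -> randwrite, 4 KiB, qd 128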
00:33:27.303 2443.00 IOPS, 305.38 MiB/s [2024-10-28T14:29:14.170Z] 2887.50 IOPS, 360.94 MiB/s 00:33:27.303 Latency(us) 00:33:27.303 [2024-10-28T14:29:14.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.303 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:27.303 nvme0n1 : 2.01 2888.11 361.01 0.00 0.00 5524.70 2852.03 8592.50 00:33:27.303 [2024-10-28T14:29:14.170Z] =================================================================================================================== 00:33:27.303 [2024-10-28T14:29:14.170Z] Total : 2888.11 361.01 0.00 0.00 5524.70 2852.03 8592.50 00:33:27.304 { 00:33:27.304 "results": [ 00:33:27.304 { 00:33:27.304 "job": "nvme0n1", 00:33:27.304 "core_mask": "0x2", 00:33:27.304 "workload": "randwrite", 00:33:27.304 "status": "finished", 00:33:27.304 "queue_depth": 16, 00:33:27.304 "io_size": 131072, 00:33:27.304 "runtime": 2.005119, 00:33:27.304 "iops": 2888.1078878610197, 00:33:27.304 "mibps": 361.01348598262746, 00:33:27.304 "io_failed": 0, 00:33:27.304 "io_timeout": 0, 00:33:27.304 "avg_latency_us": 5524.7046673957675, 00:33:27.304 "min_latency_us": 2852.0296296296297, 00:33:27.304 "max_latency_us": 8592.497777777779 00:33:27.304 } 00:33:27.304 ], 00:33:27.304 "core_count": 1 00:33:27.304 } 00:33:27.304 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:27.304 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:27.304 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:27.304 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:27.304 | select(.opcode=="crc32c") 00:33:27.304 | "\(.module_name) \(.executed)"' 00:33:27.304 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3309594 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3309594 ']' 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3309594 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:27.562 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3309594 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3309594' 00:33:27.821 killing process with pid 3309594 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3309594 00:33:27.821 Received shutdown signal, test time was about 2.000000 seconds 00:33:27.821 00:33:27.821 Latency(us) 00:33:27.821 [2024-10-28T14:29:14.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.821 [2024-10-28T14:29:14.688Z] =================================================================================================================== 00:33:27.821 [2024-10-28T14:29:14.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3309594 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3307702 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3307702 ']' 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3307702 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:27.821 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.079 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3307702 00:33:28.079 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.079 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.080 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3307702' 00:33:28.080 killing process with pid 3307702 00:33:28.080 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3307702 00:33:28.080 15:29:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3307702 00:33:28.339 00:33:28.339 real 0m21.607s 00:33:28.339 user 0m45.677s 00:33:28.339 sys 0m5.679s 00:33:28.339 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.339 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.339 ************************************ 00:33:28.339 END TEST nvmf_digest_clean 00:33:28.339 ************************************ 00:33:28.339 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:28.339 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:28.339 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.339 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:28.599 ************************************ 00:33:28.599 START TEST nvmf_digest_error 00:33:28.599 ************************************ 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3310168 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3310168 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3310168 ']' 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:28.599 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.599 [2024-10-28 15:29:15.288939] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:33:28.599 [2024-10-28 15:29:15.289045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.599 [2024-10-28 15:29:15.427773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.858 [2024-10-28 15:29:15.531715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.858 [2024-10-28 15:29:15.531828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.858 [2024-10-28 15:29:15.531865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.858 [2024-10-28 15:29:15.531897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.858 [2024-10-28 15:29:15.531924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:28.858 [2024-10-28 15:29:15.533259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.858 [2024-10-28 15:29:15.610305] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.858 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.119 null0 00:33:29.119 [2024-10-28 15:29:15.802345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.119 [2024-10-28 15:29:15.826703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3310310 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3310310 /var/tmp/bperf.sock 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3310310 ']' 
00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:29.119 15:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.119 [2024-10-28 15:29:15.937574] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:33:29.119 [2024-10-28 15:29:15.937753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310310 ] 00:33:29.379 [2024-10-28 15:29:16.107939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.379 [2024-10-28 15:29:16.217924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.317 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.317 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:30.317 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:30.317 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:30.885 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:30.885 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.885 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.885 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.885 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.885 15:29:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.480 nvme0n1 00:33:31.480 15:29:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:31.480 15:29:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.480 15:29:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
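With crc32c assigned to the accel error module and corruption injected ("accel_error_inject_error -o crc32c -t corrupt -i 256" above), the digest computed for received data stops matching the digest carried in the PDU, and the affected reads complete with the "data digest error on tqpair" / COMMAND TRANSIENT TRANSPORT ERROR lines that follow. A conceptual sketch of that comparison only, not SPDK code; NVMe/TCP data digests are CRC-32C, and zlib.crc32 (plain CRC-32) stands in purely for illustration:

    import zlib

    def check_data_digest(payload: bytes, received_digest: int) -> None:
        computed = zlib.crc32(payload) & 0xFFFFFFFF
        if computed != received_digest:
            # Surfaces in the log as "data digest error on tqpair=..." and a
            # transient transport error on the command completion.
            raise ValueError(f"data digest mismatch: {computed:#010x} != {received_digest:#010x}")

    pdu = b"example C2H payload"
    good = zlib.crc32(pdu) & 0xFFFFFFFF
    check_data_digest(pdu, good)             # clean case: digests agree
    try:
        check_data_digest(pdu, good ^ 0x1)   # corrupted digest, as injected in this test
    except ValueError as err:
        print(err)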
00:33:31.480 15:29:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.480 15:29:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:31.480 15:29:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:31.480 Running I/O for 2 seconds... 00:33:31.480 [2024-10-28 15:29:18.266280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.480 [2024-10-28 15:29:18.266389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.480 [2024-10-28 15:29:18.266450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.480 [2024-10-28 15:29:18.289408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.480 [2024-10-28 15:29:18.289493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.480 [2024-10-28 15:29:18.289540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.480 [2024-10-28 15:29:18.316931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.480 [2024-10-28 15:29:18.316998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.480 [2024-10-28 15:29:18.317044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.480 [2024-10-28 15:29:18.345060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.480 [2024-10-28 15:29:18.345147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.480 [2024-10-28 15:29:18.345194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.375892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.376001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.376048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.407752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.407789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.407808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.435342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.435423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.435466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.457322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.457405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.457450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.483435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.483516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.483561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.512760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.512797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.512817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.543646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.543722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.543742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.575625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.575718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.575739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.739 [2024-10-28 15:29:18.604082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.739 [2024-10-28 15:29:18.604169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.739 [2024-10-28 15:29:18.604216] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.634748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.634786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.634806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.663054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.663146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.663192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.685294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.685374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.685419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.712843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.712880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.712900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.740826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.740862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.740881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.770926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.771032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.771077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.793860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.793897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 
15:29:18.793951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.826860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.826896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.826916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.998 [2024-10-28 15:29:18.855743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:31.998 [2024-10-28 15:29:18.855781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.998 [2024-10-28 15:29:18.855809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:18.881086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:18.881173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:18.881218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:18.907109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:18.907194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:18.907238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:18.939316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:18.939405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:18.939450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:18.963970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:18.964052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:18.964097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:18.999286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:18.999368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10610 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:18.999412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:19.027621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:19.027724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:19.027744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:19.059763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:19.059801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:19.059821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:19.091713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:19.091752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:19.091772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.257 [2024-10-28 15:29:19.120924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.257 [2024-10-28 15:29:19.120970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.257 [2024-10-28 15:29:19.120991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.144701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.144741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.144761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.169284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.169366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.169411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.198008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.198090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:10990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.198136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 8853.00 IOPS, 34.58 MiB/s [2024-10-28T14:29:19.384Z] [2024-10-28 15:29:19.231297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.231378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.231422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.264587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.264696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.264719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.294327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.294409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.294453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.328059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.328142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.328186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.355611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.355709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.355743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.517 [2024-10-28 15:29:19.377897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.517 [2024-10-28 15:29:19.377990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.517 [2024-10-28 15:29:19.378034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.412226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.412313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.412358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.433839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.433876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.433896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.463761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.463799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.463820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.493317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.493399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.493444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.524422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.524501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.524543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.554160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.554244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.554290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.585672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.585728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.585749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.617632] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.617726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.617748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.777 [2024-10-28 15:29:19.638892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:32.777 [2024-10-28 15:29:19.638931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.777 [2024-10-28 15:29:19.638992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.670468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.670552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.670598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.699437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.699518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.699561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.731469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.731561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.731605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.764441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.764525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.764573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.794010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.794094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.794139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:33.037 [2024-10-28 15:29:19.816905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.816967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.817014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.844543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.844624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.844698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.868844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.868882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.868921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.037 [2024-10-28 15:29:19.891823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.037 [2024-10-28 15:29:19.891860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.037 [2024-10-28 15:29:19.891880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.298 [2024-10-28 15:29:19.923201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.298 [2024-10-28 15:29:19.923287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.298 [2024-10-28 15:29:19.923332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.298 [2024-10-28 15:29:19.959994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.298 [2024-10-28 15:29:19.960077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.298 [2024-10-28 15:29:19.960121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.298 [2024-10-28 15:29:20.000270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.298 [2024-10-28 15:29:20.000349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.298 [2024-10-28 15:29:20.000396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.298 [2024-10-28 15:29:20.029029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.298 [2024-10-28 15:29:20.029133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.298 [2024-10-28 15:29:20.029179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.298 [2024-10-28 15:29:20.074531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.298 [2024-10-28 15:29:20.074613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.298 [2024-10-28 15:29:20.074677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.298 [2024-10-28 15:29:20.110598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.298 [2024-10-28 15:29:20.110704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.298 [2024-10-28 15:29:20.110752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.298 [2024-10-28 15:29:20.137349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.298 [2024-10-28 15:29:20.137430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.298 [2024-10-28 15:29:20.137492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.559 [2024-10-28 15:29:20.171643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.559 [2024-10-28 15:29:20.171753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.559 [2024-10-28 15:29:20.171797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.559 [2024-10-28 15:29:20.207679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c2100) 00:33:33.559 [2024-10-28 15:29:20.207774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.559 [2024-10-28 15:29:20.207820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.559 8585.00 IOPS, 33.54 MiB/s 00:33:33.559 Latency(us) 00:33:33.559 [2024-10-28T14:29:20.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.559 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:33.559 nvme0n1 : 2.05 8431.37 32.94 0.00 0.00 14860.35 7087.60 52428.80 00:33:33.559 [2024-10-28T14:29:20.426Z] 
=================================================================================================================== 00:33:33.559 [2024-10-28T14:29:20.426Z] Total : 8431.37 32.94 0.00 0.00 14860.35 7087.60 52428.80 00:33:33.559 { 00:33:33.559 "results": [ 00:33:33.559 { 00:33:33.559 "job": "nvme0n1", 00:33:33.559 "core_mask": "0x2", 00:33:33.559 "workload": "randread", 00:33:33.559 "status": "finished", 00:33:33.559 "queue_depth": 128, 00:33:33.559 "io_size": 4096, 00:33:33.559 "runtime": 2.051625, 00:33:33.559 "iops": 8431.365381100348, 00:33:33.559 "mibps": 32.935021019923234, 00:33:33.559 "io_failed": 0, 00:33:33.559 "io_timeout": 0, 00:33:33.559 "avg_latency_us": 14860.352477143579, 00:33:33.559 "min_latency_us": 7087.597037037037, 00:33:33.559 "max_latency_us": 52428.8 00:33:33.559 } 00:33:33.559 ], 00:33:33.559 "core_count": 1 00:33:33.559 } 00:33:33.559 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:33.559 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:33.559 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:33.559 | .driver_specific 00:33:33.559 | .nvme_error 00:33:33.559 | .status_code 00:33:33.559 | .command_transient_transport_error' 00:33:33.559 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 67 > 0 )) 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3310310 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3310310 ']' 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3310310 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3310310 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:34.129 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3310310' 00:33:34.130 killing process with pid 3310310 00:33:34.130 15:29:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3310310 00:33:34.130 Received shutdown signal, test time was about 2.000000 seconds 00:33:34.130 00:33:34.130 Latency(us) 00:33:34.130 [2024-10-28T14:29:20.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.130 [2024-10-28T14:29:20.997Z] =================================================================================================================== 00:33:34.130 [2024-10-28T14:29:20.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.130 15:29:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3310310 00:33:34.389 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:34.389 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3310852 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3310852 /var/tmp/bperf.sock 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3310852 ']' 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:34.390 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.390 [2024-10-28 15:29:21.146426] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:33:34.390 [2024-10-28 15:29:21.146547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310852 ] 00:33:34.390 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:34.390 Zero copy mechanism will not be used. 
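For reference, the bdevperf launch traced above reduces to the following minimal shell sketch. The binary path and every flag are copied from the trace; the $SPDK shorthand, the backgrounding with & and capturing the PID via $! are assumptions about how the helper wires this together, and waitforlisten is the autotest helper named in the trace.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # shorthand, assumed
  BPERF_SOCK=/var/tmp/bperf.sock
  # Flags as in the trace: 128 KiB random reads, queue depth 16, 2 s run;
  # -z keeps bdevperf idle until perform_tests is sent over the RPC socket.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!                                                 # trace shows bperfpid=3310852
  waitforlisten "$bperfpid" "$BPERF_SOCK"                     # autotest helper: wait for the RPC socket
  # bdevperf reports MiB/s = IOPS * io_size / 2^20; for the 4 KiB summary above,
  # 8431.37 * 4096 / 1048576 is approximately 32.94 MiB/s.
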
00:33:34.649 [2024-10-28 15:29:21.265452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.649 [2024-10-28 15:29:21.375944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.909 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:34.909 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:34.909 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:34.909 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:35.168 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:35.168 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.168 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.168 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.168 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.168 15:29:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.106 nvme0n1 00:33:36.106 15:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:36.106 15:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.106 15:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.106 15:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.106 15:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:36.106 15:29:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:36.106 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:36.106 Zero copy mechanism will not be used. 00:33:36.106 Running I/O for 2 seconds... 
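Condensed into a minimal sketch, the RPC sequence traced above (together with the error-count check applied after the earlier 4 KiB run) looks like this. Every rpc.py, bdevperf.py and jq invocation is copied from the trace; the $SPDK and $RPC shorthands are assumptions introduced only to keep the lines readable.

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"           # shorthand, assumed
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable          # no corruption while attaching
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # --ddgst enables the TCP data digest
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32    # begin corrupting crc32c results
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  # Each corrupted digest shows up below as a "data digest error" followed by a completion
  # with COMMAND TRANSIENT TRANSPORT ERROR (00/22); afterwards the script reads the counter
  # and requires it to be non-zero, as in the (( 67 > 0 )) check earlier in this log:
  $RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
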
00:33:36.106 [2024-10-28 15:29:22.823410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.823525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.823576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.834704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.834742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.834768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.846145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.846223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.846268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.857366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.857446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.857493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.868519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.868602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.868668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.879795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.879873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.879918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.890917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.890957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.891006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.904516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.904594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.904638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.914409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.914491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.914535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.927523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.927604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.927673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.937453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.937530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.937576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.948804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.948883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.948929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.106 [2024-10-28 15:29:22.960934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.106 [2024-10-28 15:29:22.961012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.106 [2024-10-28 15:29:22.961056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.366 [2024-10-28 15:29:22.973227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.366 [2024-10-28 15:29:22.973312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-10-28 15:29:22.973374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.366 [2024-10-28 15:29:22.984842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.366 [2024-10-28 15:29:22.984879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-10-28 15:29:22.984900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.366 [2024-10-28 15:29:22.998321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.366 [2024-10-28 15:29:22.998402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-10-28 15:29:22.998448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.366 [2024-10-28 15:29:23.008964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.366 [2024-10-28 15:29:23.009046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-10-28 15:29:23.009092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.366 [2024-10-28 15:29:23.021142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.366 [2024-10-28 15:29:23.021223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.366 [2024-10-28 15:29:23.021269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.033259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.033340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.033384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.045030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.045107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.045151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.056934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.057036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:36.367 [2024-10-28 15:29:23.057082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.069076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.069157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.069203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.083191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.083289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.083337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.092666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.092725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.092744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.103920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.103998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.104042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.115522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.115602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.115646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.126060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.126137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.126181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.137423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.137503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.137548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.149046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.149126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.149171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.161626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.161718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.161740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.173387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.173483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.173533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.184812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.184847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.184867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.197318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.197398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.197445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.211196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.211276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.211321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.367 [2024-10-28 15:29:23.223911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.367 [2024-10-28 15:29:23.223949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.367 [2024-10-28 15:29:23.223970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.238538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.238622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.238688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.250835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.250920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.250967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.264847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.264929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.264975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.279162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.279247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.279293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.292921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.293002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.293064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.305256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.305338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.305385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.317768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 
[2024-10-28 15:29:23.317847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.317893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.330096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.330181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.330227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.342456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.342536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.342581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.354911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.354990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.355036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.367313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.367395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.367440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.375314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.375392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.375437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.384836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.384872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.384893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.396191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.396280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.396328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.626 [2024-10-28 15:29:23.404174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.626 [2024-10-28 15:29:23.404251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.626 [2024-10-28 15:29:23.404295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.627 [2024-10-28 15:29:23.413989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.627 [2024-10-28 15:29:23.414066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.627 [2024-10-28 15:29:23.414111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.627 [2024-10-28 15:29:23.425320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.627 [2024-10-28 15:29:23.425398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.627 [2024-10-28 15:29:23.425444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.627 [2024-10-28 15:29:23.436827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.627 [2024-10-28 15:29:23.436864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.627 [2024-10-28 15:29:23.436883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.627 [2024-10-28 15:29:23.448835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.627 [2024-10-28 15:29:23.448871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.627 [2024-10-28 15:29:23.448891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.627 [2024-10-28 15:29:23.460788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.627 [2024-10-28 15:29:23.460825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.627 [2024-10-28 15:29:23.460845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.627 [2024-10-28 15:29:23.472841] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.627 [2024-10-28 15:29:23.472878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.627 [2024-10-28 15:29:23.472898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.627 [2024-10-28 15:29:23.484823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.627 [2024-10-28 15:29:23.484860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.627 [2024-10-28 15:29:23.484886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.497213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.497300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.497349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.508877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.508915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.508936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.520431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.520511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.520556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.530979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.531058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.531103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.542617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.542717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.542739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:36.886 [2024-10-28 15:29:23.554342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.554422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.554467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.566329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.566407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.566451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.578028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.578107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.578153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.590272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.590375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.590423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.602785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.602821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.602842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.614812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.614847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.614867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.626586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.626683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.626722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.638926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.886 [2024-10-28 15:29:23.639020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.886 [2024-10-28 15:29:23.639067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.886 [2024-10-28 15:29:23.652123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.652160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.652181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.663983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.664067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.664113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.676697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.676778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.676826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.689225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.689309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.689355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.700940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.701021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.701066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.714059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.714141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.714187] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.725556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.725635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.725704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.736889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.736973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.737019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.887 [2024-10-28 15:29:23.748493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:36.887 [2024-10-28 15:29:23.748577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.887 [2024-10-28 15:29:23.748626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.761619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.761734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.761783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.775039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.775123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.775168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.787789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.787826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.787847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.800919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.801008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.801069] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.147 2581.00 IOPS, 322.62 MiB/s [2024-10-28T14:29:24.014Z] [2024-10-28 15:29:23.816679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.816781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.816828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.823342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.823420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.823465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.833917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.833954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.833994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.844021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.844102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.844148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.854242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.854320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.854365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.864532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.864611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.864674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.874844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.874879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.874899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.884914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.884949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.884969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.895348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.895428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.895473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.905535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.905612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.905672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.915727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.915763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.915783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.925152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.925231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.925276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.935156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.935234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.935281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.945819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.945856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.945876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.955733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.955770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.955790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.966141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.966226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.966270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.975941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.976019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.976082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.986358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.986435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.986481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:23.996200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:23.996279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:23.996323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.147 [2024-10-28 15:29:24.006171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.147 [2024-10-28 15:29:24.006247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.147 [2024-10-28 15:29:24.006292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.016852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 
[2024-10-28 15:29:24.016899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.016921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.027077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.027161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.027209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.037272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.037351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.037396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.047184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.047263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.047307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.057352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.057432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.057478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.067426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.067520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.067568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.077883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.077919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.077939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.088056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.088134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.088178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.098270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.098348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.098392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.108293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.108372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.108416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.118104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.118182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.118227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.127940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.128031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.128075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.137915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.137995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.138039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.148249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.148328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.148380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.158707] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.158765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.158786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.170879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.170923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.170978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.182864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.182903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.182924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.193798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.193835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.193855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.204360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.204443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.204490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.214317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.214399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.214444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.224783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.224819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.224840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:37.408 [2024-10-28 15:29:24.235646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.235746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.235768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.245774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.245811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.245840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.256399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.256503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.256553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.408 [2024-10-28 15:29:24.266929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.408 [2024-10-28 15:29:24.266965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.408 [2024-10-28 15:29:24.267015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.278292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.278380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.278428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.288141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.288225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.288271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.298309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.298390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.298436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.308849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.308886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.308907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.318934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.319027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.319073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.329027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.329108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.329154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.339699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.339769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.339791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.350137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.350217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.350262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.360169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.360248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.360294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.370239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.370317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.370361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.380034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.380111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.380155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.390144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.390225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.390270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.400385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.400464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.400511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.411563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.411646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.411715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.421915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.421996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.422041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.432354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.432433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.432477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.442288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.442367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.442412] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.452364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.452441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.452486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.462285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.462365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.462410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.472706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.472766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.472787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.482350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.482429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.482473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.492822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.492856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.492876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.502384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.502459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.502504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.512984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.513064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:37.669 [2024-10-28 15:29:24.513125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.669 [2024-10-28 15:29:24.523112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.669 [2024-10-28 15:29:24.523195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.669 [2024-10-28 15:29:24.523241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.930 [2024-10-28 15:29:24.534206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.534292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.534340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.545578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.545680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.545743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.556311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.556395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.556441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.567093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.567174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.567221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.577146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.577227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.577272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.587777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.587814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.587836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.599313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.599395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.599442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.609757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.609800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.609821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.620284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.620366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.620413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.630332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.630413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.630457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.641980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.642063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.642110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.652110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.652189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.652234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.662134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.662214] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.662258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.674141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.674241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.674285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.682823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.682858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.682878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.693510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.693591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.693636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.704972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.705051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.705096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.715690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.715753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.715774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.724886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.724926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.724947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.735208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.735287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.735332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.745817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.745853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.745873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.756841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.756886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.756907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.768373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.768457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.768504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.779768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.779805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.779825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.931 [2024-10-28 15:29:24.788923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:37.931 [2024-10-28 15:29:24.789013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.931 [2024-10-28 15:29:24.789061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.192 [2024-10-28 15:29:24.799241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50) 00:33:38.192 [2024-10-28 15:29:24.799329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.192 [2024-10-28 15:29:24.799377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.192 2783.00 IOPS, 347.88 MiB/s [2024-10-28T14:29:25.059Z] [2024-10-28 15:29:24.812318] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1510d50)
00:33:38.192 [2024-10-28 15:29:24.812404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:38.192 [2024-10-28 15:29:24.812451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:38.192
00:33:38.192 Latency(us)
00:33:38.192 [2024-10-28T14:29:25.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:38.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:38.192 nvme0n1 : 2.01 2783.37 347.92 0.00 0.00 5737.12 1808.31 17087.91
00:33:38.192 [2024-10-28T14:29:25.059Z] ===================================================================================================================
00:33:38.192 [2024-10-28T14:29:25.059Z] Total : 2783.37 347.92 0.00 0.00 5737.12 1808.31 17087.91
00:33:38.192 {
00:33:38.192 "results": [
00:33:38.192 {
00:33:38.192 "job": "nvme0n1",
00:33:38.192 "core_mask": "0x2",
00:33:38.192 "workload": "randread",
00:33:38.192 "status": "finished",
00:33:38.192 "queue_depth": 16,
00:33:38.192 "io_size": 131072,
00:33:38.192 "runtime": 2.005483,
00:33:38.192 "iops": 2783.3693928096122,
00:33:38.192 "mibps": 347.92117410120153,
00:33:38.192 "io_failed": 0,
00:33:38.192 "io_timeout": 0,
00:33:38.192 "avg_latency_us": 5737.124252823229,
00:33:38.192 "min_latency_us": 1808.3081481481481,
00:33:38.192 "max_latency_us": 17087.905185185184
00:33:38.192 }
00:33:38.192 ],
00:33:38.192 "core_count": 1
00:33:38.192 }
00:33:38.192 15:29:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:38.192 15:29:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:38.192 15:29:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:38.192 | .driver_specific
00:33:38.192 | .nvme_error
00:33:38.192 | .status_code
00:33:38.192 | .command_transient_transport_error'
00:33:38.192 15:29:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:38.451 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 180 > 0 ))
00:33:38.451 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3310852
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3310852 ']'
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3310852
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3310852
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
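The host/digest.sh@27/@28 trace above shows how the count checked at host/digest.sh@71 is obtained: bdevperf's iostat is fetched over the bperf RPC socket and the NVMe transient-transport-error counter is extracted with jq. A minimal standalone sketch of that query, reusing the socket path, bdev name, and jq filter from this log (the helper name mirrors the traced get_transient_errcount; this is an illustration, not the digest.sh source):

#!/usr/bin/env bash
# Sketch: read the COMMAND TRANSIENT TRANSPORT ERROR counter for a bdev through
# the bdevperf RPC socket, as the trace above does (paths taken from this log).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
	local bdev=$1
	"$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
		| jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}

# The test then requires a non-zero count, matching the (( 180 > 0 )) check above:
(( $(get_transient_errcount nvme0n1) > 0 ))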
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3310852'
00:33:38.452 killing process with pid 3310852
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3310852
00:33:38.452 Received shutdown signal, test time was about 2.000000 seconds
00:33:38.452
00:33:38.452 Latency(us)
00:33:38.452 [2024-10-28T14:29:25.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:38.452 [2024-10-28T14:29:25.319Z] ===================================================================================================================
00:33:38.452 [2024-10-28T14:29:25.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:38.452 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3310852
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3311381
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3311381 /var/tmp/bperf.sock
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3311381 ']'
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:38.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:38.710 15:29:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:38.970 [2024-10-28 15:29:25.620762] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization...
00:33:38.971 [2024-10-28 15:29:25.620873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311381 ]
00:33:38.971 [2024-10-28 15:29:25.736486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.231 [2024-10-28 15:29:25.851842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:40.170 15:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:40.170 15:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:33:40.170 15:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:40.170 15:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:40.430 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:40.430 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:40.430 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:40.430 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:40.430 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:40.430 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:41.043 nvme0n1
00:33:41.043 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:41.043 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:41.043 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.043 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:41.043 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:41.043 15:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:41.304 Running I/O for 2 seconds...
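Condensing the randwrite pass that the trace above sets up before "Running I/O for 2 seconds...": bdevperf is launched in RPC-wait mode (-z) on the bperf socket (the @57 command shown earlier), NVMe error counting and unlimited retries are enabled, the controller is attached with data digest (--ddgst) over TCP, crc32c corruption is injected every 256 operations, and the workload is driven through bdevperf.py. The sketch below reuses those exact commands from the log; treating rpc_cmd as a call to the target application's default RPC socket and the socket-wait loop standing in for waitforlisten are assumptions, and the snippet is an illustration rather than the digest.sh source.

#!/usr/bin/env bash
# Sketch of the randwrite digest-error pass assembled from the trace above.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# Start bdevperf idle (-z) so it can be driven over the bperf RPC socket.
"$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# The traced test waits for the RPC socket via waitforlisten; a simple stand-in:
while [ ! -S "$bperf_sock" ]; do sleep 0.1; done

# Count NVMe errors per bdev and retry indefinitely so digest errors do not fail I/O.
"$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# rpc_cmd in the trace: assumed here to reach the target app's default RPC socket.
"$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF TCP controller with data digest enabled, then corrupt every
# 256th crc32c calculation so the writes above complete with digest errors.
"$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
	-a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second workload; the injected digest errors surface as the COMMAND
# TRANSIENT TRANSPORT ERROR completions in the log records that follow.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests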
00:33:41.304 [2024-10-28 15:29:27.960956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f6458 00:33:41.304 [2024-10-28 15:29:27.963693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:27.963785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.304 [2024-10-28 15:29:27.991018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f4f40 00:33:41.304 [2024-10-28 15:29:27.993694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:27.993771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.304 [2024-10-28 15:29:28.019826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ef6a8 00:33:41.304 [2024-10-28 15:29:28.022059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:28.022133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.304 [2024-10-28 15:29:28.043557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166df118 00:33:41.304 [2024-10-28 15:29:28.044770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:28.044805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.304 [2024-10-28 15:29:28.070729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e0a68 00:33:41.304 [2024-10-28 15:29:28.073420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:28.073510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.304 [2024-10-28 15:29:28.094804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e12d8 00:33:41.304 [2024-10-28 15:29:28.098261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:28.098339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.304 [2024-10-28 15:29:28.117002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f57b0 00:33:41.304 [2024-10-28 15:29:28.119674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:28.119709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:33:41.304 [2024-10-28 15:29:28.142735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ea680 00:33:41.304 [2024-10-28 15:29:28.146370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.304 [2024-10-28 15:29:28.146455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.170493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ec408 00:33:41.565 [2024-10-28 15:29:28.174155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.565 [2024-10-28 15:29:28.174235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.200298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e6300 00:33:41.565 [2024-10-28 15:29:28.203472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.565 [2024-10-28 15:29:28.203560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.236114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f7538 00:33:41.565 [2024-10-28 15:29:28.240840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.565 [2024-10-28 15:29:28.240916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.265225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f20d8 00:33:41.565 [2024-10-28 15:29:28.269702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.565 [2024-10-28 15:29:28.269780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.285394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f0788 00:33:41.565 [2024-10-28 15:29:28.287369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.565 [2024-10-28 15:29:28.287442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.318291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e8088 00:33:41.565 [2024-10-28 15:29:28.322311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.565 [2024-10-28 15:29:28.322387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.348257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fd208 00:33:41.565 [2024-10-28 15:29:28.352690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.565 [2024-10-28 15:29:28.352766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.565 [2024-10-28 15:29:28.371212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f0788 00:33:41.565 [2024-10-28 15:29:28.373606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.566 [2024-10-28 15:29:28.373696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.566 [2024-10-28 15:29:28.397837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e38d0 00:33:41.566 [2024-10-28 15:29:28.400840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.566 [2024-10-28 15:29:28.400875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.566 [2024-10-28 15:29:28.423055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f8618 00:33:41.566 [2024-10-28 15:29:28.426769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.566 [2024-10-28 15:29:28.426846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.451472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e12d8 00:33:41.827 [2024-10-28 15:29:28.454339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.454431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.480727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e6738 00:33:41.827 [2024-10-28 15:29:28.483665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.483742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.516575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fc128 00:33:41.827 [2024-10-28 15:29:28.521111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.521186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.537764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fc128 00:33:41.827 [2024-10-28 15:29:28.539965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.540049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.573763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e6738 00:33:41.827 [2024-10-28 15:29:28.577587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.577688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.601449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166eb760 00:33:41.827 [2024-10-28 15:29:28.604885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.604959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.631008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f8618 00:33:41.827 [2024-10-28 15:29:28.634110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.634193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.666996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fa7d8 00:33:41.827 [2024-10-28 15:29:28.671699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.671783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.827 [2024-10-28 15:29:28.688307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fe2e8 00:33:41.827 [2024-10-28 15:29:28.690763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.827 [2024-10-28 15:29:28.690852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.724855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f0788 00:33:42.088 [2024-10-28 15:29:28.728877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.728965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.754607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ef270 00:33:42.088 [2024-10-28 15:29:28.758590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.758683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.781154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f8e88 00:33:42.088 [2024-10-28 15:29:28.784400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.784474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.810352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166feb58 00:33:42.088 [2024-10-28 15:29:28.813633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.813728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.846305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166dfdc0 00:33:42.088 [2024-10-28 15:29:28.851207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.851282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.867460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ed0b0 00:33:42.088 [2024-10-28 15:29:28.869791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.869864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.903164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e99d8 00:33:42.088 [2024-10-28 15:29:28.907347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.907433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.088 [2024-10-28 15:29:28.924309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e3d08 00:33:42.088 [2024-10-28 15:29:28.926182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.088 [2024-10-28 15:29:28.926255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:42.347 8926.00 IOPS, 34.87 MiB/s [2024-10-28T14:29:29.214Z] [2024-10-28 15:29:28.964882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fc128 00:33:42.347 [2024-10-28 15:29:28.968350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.347 [2024-10-28 15:29:28.968440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:28.992495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e73e0 00:33:42.348 [2024-10-28 15:29:28.995285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:28.995360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.021759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f57b0 00:33:42.348 [2024-10-28 15:29:29.024512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.024597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.055370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f4f40 00:33:42.348 [2024-10-28 15:29:29.058162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.058236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.071558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f6890 00:33:42.348 [2024-10-28 15:29:29.073074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.073108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.084630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166dfdc0 00:33:42.348 [2024-10-28 15:29:29.086292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.086325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.095075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e99d8 00:33:42.348 [2024-10-28 15:29:29.096039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23123 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.096073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.111247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166edd58 00:33:42.348 [2024-10-28 15:29:29.112917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.112949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.123658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e9168 00:33:42.348 [2024-10-28 15:29:29.124988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.125022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.136603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f6890 00:33:42.348 [2024-10-28 15:29:29.137730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.137763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.150353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ec840 00:33:42.348 [2024-10-28 15:29:29.151821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.151854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.162511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e5658 00:33:42.348 [2024-10-28 15:29:29.163666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.163700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.176662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fc128 00:33:42.348 [2024-10-28 15:29:29.178009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.178036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.187187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e4578 00:33:42.348 [2024-10-28 15:29:29.188045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17930 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.188081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.200998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166edd58 00:33:42.348 [2024-10-28 15:29:29.202713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.202741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:42.348 [2024-10-28 15:29:29.208987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ee5c8 00:33:42.348 [2024-10-28 15:29:29.209909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.348 [2024-10-28 15:29:29.209940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.223137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f8e88 00:33:42.606 [2024-10-28 15:29:29.224580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.224610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.233262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166feb58 00:33:42.606 [2024-10-28 15:29:29.234809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.234838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.243612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fac10 00:33:42.606 [2024-10-28 15:29:29.244416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.244442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.255677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ebfd0 00:33:42.606 [2024-10-28 15:29:29.256745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.256773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.268178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f7970 00:33:42.606 [2024-10-28 15:29:29.269431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:117 nsid:1 lba:12313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.269459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.279718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166dfdc0 00:33:42.606 [2024-10-28 15:29:29.281124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.281151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.290164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e01f8 00:33:42.606 [2024-10-28 15:29:29.291398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.291424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.303309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ef6a8 00:33:42.606 [2024-10-28 15:29:29.305147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.305175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.311350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fda78 00:33:42.606 [2024-10-28 15:29:29.312252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.312279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.325280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e0ea0 00:33:42.606 [2024-10-28 15:29:29.326725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.606 [2024-10-28 15:29:29.326754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:42.606 [2024-10-28 15:29:29.336053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f35f0 00:33:42.607 [2024-10-28 15:29:29.337296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.337324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.346692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ef270 00:33:42.607 [2024-10-28 15:29:29.347960] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.347987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.360566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e6738 00:33:42.607 [2024-10-28 15:29:29.362419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.362446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.368531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f4b08 00:33:42.607 [2024-10-28 15:29:29.369393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.369420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.381968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f3a28 00:33:42.607 [2024-10-28 15:29:29.383294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.383322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.392520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166efae0 00:33:42.607 [2024-10-28 15:29:29.393809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.393838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.406101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f5be8 00:33:42.607 [2024-10-28 15:29:29.407667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.407696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.416822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f2510 00:33:42.607 [2024-10-28 15:29:29.418333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.418360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.426566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e88f8 00:33:42.607 [2024-10-28 15:29:29.427428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.427455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.440360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fc998 00:33:42.607 [2024-10-28 15:29:29.442129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.442156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.448561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fb8b8 00:33:42.607 [2024-10-28 15:29:29.449451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.449477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:42.607 [2024-10-28 15:29:29.462461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ef270 00:33:42.607 [2024-10-28 15:29:29.463877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.607 [2024-10-28 15:29:29.463906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.474273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ebb98 00:33:42.865 [2024-10-28 15:29:29.475817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.475848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.484218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e38d0 00:33:42.865 [2024-10-28 15:29:29.485727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.485762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.495994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e4de8 00:33:42.865 [2024-10-28 15:29:29.497668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.497698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.505914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166dece0 00:33:42.865 [2024-10-28 
15:29:29.506830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.506860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.517980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ed920 00:33:42.865 [2024-10-28 15:29:29.519000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.519028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.531877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e4de8 00:33:42.865 [2024-10-28 15:29:29.533462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.533498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.543624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f0788 00:33:42.865 [2024-10-28 15:29:29.545421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.545450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.551724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fc560 00:33:42.865 [2024-10-28 15:29:29.552624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.552674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.565430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e2c28 00:33:42.865 [2024-10-28 15:29:29.567003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.567033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.576184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e6fa8 00:33:42.865 [2024-10-28 15:29:29.577447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.577476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.587453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166df988 
00:33:42.865 [2024-10-28 15:29:29.588691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.588724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.598787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f4f40 00:33:42.865 [2024-10-28 15:29:29.599540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.865 [2024-10-28 15:29:29.599568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:42.865 [2024-10-28 15:29:29.610502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e6b70 00:33:42.866 [2024-10-28 15:29:29.611440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.611468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.621162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166efae0 00:33:42.866 [2024-10-28 15:29:29.622750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.622778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.631596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f2510 00:33:42.866 [2024-10-28 15:29:29.632466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.632493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.642082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f0788 00:33:42.866 [2024-10-28 15:29:29.642860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.642891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.654541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fc560 00:33:42.866 [2024-10-28 15:29:29.655536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.655563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.666154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) 
with pdu=0x2000166fc128 00:33:42.866 [2024-10-28 15:29:29.667289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.667315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.677335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166de8a8 00:33:42.866 [2024-10-28 15:29:29.678587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.678614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.688826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ec408 00:33:42.866 [2024-10-28 15:29:29.690101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.690129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.699609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166edd58 00:33:42.866 [2024-10-28 15:29:29.700780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.700808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.711213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e7c50 00:33:42.866 [2024-10-28 15:29:29.712344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.712371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:42.866 [2024-10-28 15:29:29.721815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f7538 00:33:42.866 [2024-10-28 15:29:29.722920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.866 [2024-10-28 15:29:29.722949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.733558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f0bc0 00:33:43.124 [2024-10-28 15:29:29.734754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.734785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.745362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e79b30) with pdu=0x2000166f9f68 00:33:43.124 [2024-10-28 15:29:29.746488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.746517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.756984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f31b8 00:33:43.124 [2024-10-28 15:29:29.757973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.758001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.769878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e4578 00:33:43.124 [2024-10-28 15:29:29.771728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.771756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.777844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166fef90 00:33:43.124 [2024-10-28 15:29:29.778601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.778642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.789538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f4f40 00:33:43.124 [2024-10-28 15:29:29.790540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.790567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.800789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f0788 00:33:43.124 [2024-10-28 15:29:29.801929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.801971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.812122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ec840 00:33:43.124 [2024-10-28 15:29:29.812843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.812872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.823455] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e49b0 00:33:43.124 [2024-10-28 15:29:29.824451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.824478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.835462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e6300 00:33:43.124 [2024-10-28 15:29:29.836603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.836645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.846596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166de8a8 00:33:43.124 [2024-10-28 15:29:29.847905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.847933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.857045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e3498 00:33:43.124 [2024-10-28 15:29:29.857894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.857922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.869579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f35f0 00:33:43.124 [2024-10-28 15:29:29.871036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.871064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.880192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166eff18 00:33:43.124 [2024-10-28 15:29:29.881360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.881402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.891542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ec408 00:33:43.124 [2024-10-28 15:29:29.892557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.124 [2024-10-28 15:29:29.892585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:43.124 [2024-10-28 15:29:29.902016] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166f31b8
00:33:43.124 [2024-10-28 15:29:29.902872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.124 [2024-10-28 15:29:29.902899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:43.124 [2024-10-28 15:29:29.913547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e38d0
00:33:43.124 [2024-10-28 15:29:29.914426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.124 [2024-10-28 15:29:29.914453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:43.124 [2024-10-28 15:29:29.925092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166ff3c8
00:33:43.124 [2024-10-28 15:29:29.925813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.124 [2024-10-28 15:29:29.925842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:43.125 [2024-10-28 15:29:29.936625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79b30) with pdu=0x2000166e4140
00:33:43.125 14645.50 IOPS, 57.21 MiB/s [2024-10-28T14:29:29.992Z] [2024-10-28 15:29:29.937631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.125 [2024-10-28 15:29:29.937664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:43.125
00:33:43.125 Latency(us)
00:33:43.125 [2024-10-28T14:29:29.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.125 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:43.125 nvme0n1 : 2.01 14638.28 57.18 0.00 0.00 8727.99 2815.62 37476.88
00:33:43.125 [2024-10-28T14:29:29.992Z] ===================================================================================================================
00:33:43.125 [2024-10-28T14:29:29.992Z] Total : 14638.28 57.18 0.00 0.00 8727.99 2815.62 37476.88
00:33:43.125 {
00:33:43.125 "results": [
00:33:43.125 {
00:33:43.125 "job": "nvme0n1",
00:33:43.125 "core_mask": "0x2",
00:33:43.125 "workload": "randwrite",
00:33:43.125 "status": "finished",
00:33:43.125 "queue_depth": 128,
00:33:43.125 "io_size": 4096,
00:33:43.125 "runtime": 2.00529,
00:33:43.125 "iops": 14638.281744785043,
00:33:43.125 "mibps": 57.180788065566574,
00:33:43.125 "io_failed": 0,
00:33:43.125 "io_timeout": 0,
00:33:43.125 "avg_latency_us": 8727.994395060046,
00:33:43.125 "min_latency_us": 2815.6207407407405,
00:33:43.125 "max_latency_us": 37476.88296296296
00:33:43.125 }
00:33:43.125 ],
00:33:43.125 "core_count": 1
00:33:43.125 }
00:33:43.125 15:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:43.125 15:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:43.125 15:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:43.125 | .driver_specific
00:33:43.125 | .nvme_error
00:33:43.125 | .status_code
00:33:43.125 | .command_transient_transport_error'
00:33:43.125 15:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:43.383 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 ))
00:33:43.383 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3311381
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3311381 ']'
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3311381
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3311381
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3311381'
00:33:43.644 killing process with pid 3311381
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3311381
00:33:43.644 Received shutdown signal, test time was about 2.000000 seconds
00:33:43.644
00:33:43.644 Latency(us)
00:33:43.644 [2024-10-28T14:29:30.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.644 [2024-10-28T14:29:30.511Z] ===================================================================================================================
00:33:43.644 [2024-10-28T14:29:30.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:43.644 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3311381
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3311932
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3311932 /var/tmp/bperf.sock
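The check traced above is the pass/fail gate for the 4 KiB pass that just finished: host/digest.sh reads bdevperf's iostat over the /var/tmp/bperf.sock RPC socket and requires the accumulated count of COMMAND TRANSIENT TRANSPORT ERROR completions to be non-zero before it kills the bperf process and relaunches it for the 128 KiB, queue-depth-16 pass. A minimal sketch of that check, paraphrasing the commands traced above rather than quoting digest.sh itself (the rpc.py path, socket, and bdev name are the ones used in this job):

    # Count NVMe completions that finished as COMMAND TRANSIENT TRANSPORT ERROR.
    # bdev_get_iostat reports them under driver_specific.nvme_error because the
    # test starts bdev_nvme with --nvme-error-stat (see the set_options call traced below).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the run above reported 115, so this gate passed

The relaunch that follows re-arms the failure path before any I/O is issued: accel_error_inject_error first clears the previous crc32c injection (-t disable) and then re-enables corruption (-t corrupt -i 32), the controller is attached with --ddgst so data digests are generated and verified on the TCP connection, and bdevperf.py perform_tests then drives the 2-second random-write run whose digest errors fill the records below.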
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3311932 ']'
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:43.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:43.905 15:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.905 [2024-10-28 15:29:30.676489] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization...
00:33:43.905 [2024-10-28 15:29:30.676597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311932 ]
00:33:43.905 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:43.905 Zero copy mechanism will not be used.
00:33:44.166 [2024-10-28 15:29:30.800760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:44.166 [2024-10-28 15:29:30.918169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:44.425 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:44.425 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:33:44.425 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:44.425 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:44.683 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:44.683 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.683 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:44.683 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.683 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:44.683 15:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:45.249 nvme0n1
00:33:45.249 15:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:45.249 15:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:45.249 15:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:45.249 15:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:45.249 15:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:45.249 15:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:45.510 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:45.510 Zero copy mechanism will not be used.
00:33:45.510 Running I/O for 2 seconds...
00:33:45.510 [2024-10-28 15:29:32.274920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90
00:33:45.510 [2024-10-28 15:29:32.275737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.510 [2024-10-28 15:29:32.275829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.510 [2024-10-28 15:29:32.288992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90
00:33:45.510 [2024-10-28 15:29:32.289779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.510 [2024-10-28 15:29:32.289858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.510 [2024-10-28 15:29:32.303053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90
00:33:45.510 [2024-10-28 15:29:32.303837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.510 [2024-10-28 15:29:32.303915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.510 [2024-10-28 15:29:32.317112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90
00:33:45.510 [2024-10-28 15:29:32.317902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.510 [2024-10-28 15:29:32.317979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.510 [2024-10-28 15:29:32.331317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90
00:33:45.510 [2024-10-28 15:29:32.332094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.510 [2024-10-28 15:29:32.332171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.510 [2024-10-28 15:29:32.345530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.510 [2024-10-28 15:29:32.346324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.510 [2024-10-28 15:29:32.346400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.510 [2024-10-28 15:29:32.359701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.510 [2024-10-28 15:29:32.360463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.510 [2024-10-28 15:29:32.360539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.510 [2024-10-28 15:29:32.373854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.510 [2024-10-28 15:29:32.374637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.510 [2024-10-28 15:29:32.374741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.388076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.388896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.388976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.401901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.402680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.402758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.415737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.416313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.416402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.429455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.430223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.430302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.443358] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.443948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.444046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.456546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.457298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.457374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.469911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.470697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.470772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.483136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.483821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.483863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.496379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.497214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.497314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.508979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.509770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.509851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.522234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.523024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.523100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:45.771 [2024-10-28 15:29:32.535712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.771 [2024-10-28 15:29:32.536437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.771 [2024-10-28 15:29:32.536515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.771 [2024-10-28 15:29:32.548961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.772 [2024-10-28 15:29:32.549727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.772 [2024-10-28 15:29:32.549769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.772 [2024-10-28 15:29:32.562139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.772 [2024-10-28 15:29:32.562928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.772 [2024-10-28 15:29:32.563004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.772 [2024-10-28 15:29:32.575681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.772 [2024-10-28 15:29:32.576437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.772 [2024-10-28 15:29:32.576513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.772 [2024-10-28 15:29:32.588979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.772 [2024-10-28 15:29:32.589738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.772 [2024-10-28 15:29:32.589780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.772 [2024-10-28 15:29:32.602101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.772 [2024-10-28 15:29:32.602814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.772 [2024-10-28 15:29:32.602857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.772 [2024-10-28 15:29:32.615477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.772 [2024-10-28 15:29:32.616248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.772 [2024-10-28 15:29:32.616324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.772 [2024-10-28 15:29:32.629040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:45.772 [2024-10-28 15:29:32.629826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.772 [2024-10-28 15:29:32.629903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.643293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.643913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.643958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.656898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.657678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.657739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.670424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.671204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.671281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.683736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.684487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.684563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.696839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.697585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.697675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.709882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.710578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.710673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.722892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.723628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.723719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.736144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.736916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.736993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.749467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.750072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.750148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.762854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.763585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.763690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.776274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.776960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.777003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.789566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.790306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.790382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.803029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.803821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.803897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.816671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.817240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.817316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.830092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.830879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.830953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.843645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.844246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.844332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.856992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.857721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.857797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.870411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.871191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.871267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.883872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.884670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 [2024-10-28 15:29:32.884745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.034 [2024-10-28 15:29:32.897744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.034 [2024-10-28 15:29:32.898443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.034 
[2024-10-28 15:29:32.898524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:32.911760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:32.912530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:32.912609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:32.925189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:32.925968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:32.926043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:32.938790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:32.939552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:32.939628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:32.952169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:32.952939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:32.953016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:32.965774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:32.966436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:32.966513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:32.979128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:32.979921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:32.979997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:32.992718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:32.993356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:32.993432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.005864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.006612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.006712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.019498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.020103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.020180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.032921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.033647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.033737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.046268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.046871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.046915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.059161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.059913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.059990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.072547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.073122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.073210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.085980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.086690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.086755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.099173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.099933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.100009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.112734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.113514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.113602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.122856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.123561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.123638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.133497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.133874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.133930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.142369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.142710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.142756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.149040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.149364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.149398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.294 [2024-10-28 15:29:33.155625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.294 [2024-10-28 15:29:33.155980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.294 [2024-10-28 15:29:33.156017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.162420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.162848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.162896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.169135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.169454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.169489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.175944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.176349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.176391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.184159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.184511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.184545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.191325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.191648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.191693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.198159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.198535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.198569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.204965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 
[2024-10-28 15:29:33.205371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.205415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.212187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.212544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.212577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.219194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.219515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.219561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.226070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.226434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.226468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.233017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.233415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.233449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.240359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.240776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.240816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.248845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.249189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.249226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.556 [2024-10-28 15:29:33.256556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.256886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.256921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.556 2547.00 IOPS, 318.38 MiB/s [2024-10-28T14:29:33.423Z] [2024-10-28 15:29:33.265490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.556 [2024-10-28 15:29:33.265832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.556 [2024-10-28 15:29:33.265867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.272448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.272781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.272814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.279246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.279600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.279642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.285937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.286290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.286324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.292717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.293069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.293102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.299474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.299799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.299833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:46.557 [2024-10-28 15:29:33.306159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.306571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.306604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.313030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.313346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.313379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.319625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.319954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.319999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.326337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.326659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.326702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.332950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.333341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.333383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.339633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.339958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.340002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.346411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.346738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.346775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.353079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.353441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.353475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.359807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.360160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.360192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.366594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.366922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.366956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.373320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.373695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.373728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.380085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.380448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.380481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.386762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.387080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.387114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.394192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.394557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.394590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.401118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.401449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.401492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.407843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.408212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.408245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.557 [2024-10-28 15:29:33.417168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.557 [2024-10-28 15:29:33.418030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.557 [2024-10-28 15:29:33.418133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.431598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.432146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.432240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.444638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.445155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.445231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.458107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.458548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.458629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.468088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.468792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.468827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.479773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.480166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.480242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.492899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.493450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.493523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.505905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.506429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.506503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.519234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.519740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.519814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.532206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.532747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.532845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.545623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.546148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 [2024-10-28 15:29:33.546222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.558973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.559475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.818 
[2024-10-28 15:29:33.559549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.818 [2024-10-28 15:29:33.572016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.818 [2024-10-28 15:29:33.572453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.572537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.585336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.585901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.585975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.598874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.599400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.599474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.612059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.612492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.612563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.625196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.625663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.625755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.638689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.639213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.639287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.652111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.652674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.652760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.665684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.666199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.666274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.819 [2024-10-28 15:29:33.678987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:46.819 [2024-10-28 15:29:33.679512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.819 [2024-10-28 15:29:33.679591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.692489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.693005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.693084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.705685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.706187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.706262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.718625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.719143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.719218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.731925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.732468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.732542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.745386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.745942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.746018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.758788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.759316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.759390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.772255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.772827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.772902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.785567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.786150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.786225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.798924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.799445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.079 [2024-10-28 15:29:33.799519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.079 [2024-10-28 15:29:33.812210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.079 [2024-10-28 15:29:33.812737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.812813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.825777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.826321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.826395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.839368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.839912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.839986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.852589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.853076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.853152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.865841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.866271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.866345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.879136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.879580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.879676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.892254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.892783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.892858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.905369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.905938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.906011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.917471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.917927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.917961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.928619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 
[2024-10-28 15:29:33.929159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.929233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.080 [2024-10-28 15:29:33.939807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.080 [2024-10-28 15:29:33.940584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.080 [2024-10-28 15:29:33.940679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:33.951731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:33.952360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:33.952440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:33.964778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:33.965566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:33.965642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:33.978119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:33.978903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:33.978978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:33.991722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:33.992461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:33.992548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.005046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.005834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.005909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.018509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.019277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.019352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.031887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.032668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.032743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.045372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.045922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.046012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.058620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.059414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.059486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.071992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.072781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.072855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.085355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.086161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.086238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.098902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.099692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.099766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.112406] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.112979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.113054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.125748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.126456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.126540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.139106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.139859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.139933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.149716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.150112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.150152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.161844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.162609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.162710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.173523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.173893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.173955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.341 [2024-10-28 15:29:34.184797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.185274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.185350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
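The records above all follow the same pattern this test provokes: the TCP transport reports a data digest (CRC32C) mismatch for a WRITE, and the driver completes the command with the NVMe status COMMAND TRANSIENT TRANSPORT ERROR (SCT 0x0 / SC 0x22). Further down in this log, host/digest.sh tallies those completions through bdevperf's RPC socket with bdev_get_iostat and a jq filter; the following is a minimal stand-alone sketch of that tally reconstructed from the trace (the function name, socket path, rpc.py path, jq filter and bdev name are taken from the log itself; the surrounding shell plumbing is an approximation rather than the verbatim script):

    get_transient_errcount() {
        # Ask bdevperf for per-bdev I/O statistics over its RPC socket and pull
        # the transient-transport-error counter out of the driver-specific
        # NVMe error statistics for the first (and only) bdev.
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test only passes if at least one such completion was counted; in this
    # run the counter reaches 171.
    (( $(get_transient_errcount nvme0n1) > 0 ))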
00:33:47.341 [2024-10-28 15:29:34.196400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.341 [2024-10-28 15:29:34.196898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.341 [2024-10-28 15:29:34.196981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.602 [2024-10-28 15:29:34.208228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.602 [2024-10-28 15:29:34.208829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.602 [2024-10-28 15:29:34.208871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.602 [2024-10-28 15:29:34.219957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.602 [2024-10-28 15:29:34.220676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.602 [2024-10-28 15:29:34.220743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.602 [2024-10-28 15:29:34.232564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.602 [2024-10-28 15:29:34.232985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.602 [2024-10-28 15:29:34.233019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.602 [2024-10-28 15:29:34.243713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.602 [2024-10-28 15:29:34.244035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.602 [2024-10-28 15:29:34.244103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.602 [2024-10-28 15:29:34.254762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.602 [2024-10-28 15:29:34.255083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.602 [2024-10-28 15:29:34.255134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.602 2643.00 IOPS, 330.38 MiB/s [2024-10-28T14:29:34.469Z] [2024-10-28 15:29:34.268054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e79d20) with pdu=0x2000166fef90 00:33:47.602 [2024-10-28 15:29:34.268243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.602 [2024-10-28 15:29:34.268275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.602 00:33:47.602 Latency(us) 00:33:47.602 [2024-10-28T14:29:34.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.602 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:47.602 nvme0n1 : 2.01 2640.29 330.04 0.00 0.00 6040.92 3094.76 14369.37 00:33:47.602 [2024-10-28T14:29:34.469Z] =================================================================================================================== 00:33:47.602 [2024-10-28T14:29:34.469Z] Total : 2640.29 330.04 0.00 0.00 6040.92 3094.76 14369.37 00:33:47.602 { 00:33:47.602 "results": [ 00:33:47.602 { 00:33:47.602 "job": "nvme0n1", 00:33:47.602 "core_mask": "0x2", 00:33:47.602 "workload": "randwrite", 00:33:47.602 "status": "finished", 00:33:47.602 "queue_depth": 16, 00:33:47.602 "io_size": 131072, 00:33:47.602 "runtime": 2.008116, 00:33:47.602 "iops": 2640.285720546024, 00:33:47.602 "mibps": 330.035715068253, 00:33:47.602 "io_failed": 0, 00:33:47.602 "io_timeout": 0, 00:33:47.602 "avg_latency_us": 6040.917018595359, 00:33:47.602 "min_latency_us": 3094.7555555555555, 00:33:47.602 "max_latency_us": 14369.374814814815 00:33:47.602 } 00:33:47.602 ], 00:33:47.602 "core_count": 1 00:33:47.602 } 00:33:47.602 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:47.602 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:47.602 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:47.602 | .driver_specific 00:33:47.602 | .nvme_error 00:33:47.602 | .status_code 00:33:47.602 | .command_transient_transport_error' 00:33:47.602 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:47.861 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 )) 00:33:47.861 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3311932 00:33:47.861 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3311932 ']' 00:33:47.861 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3311932 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3311932 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3311932' 00:33:47.862 killing process with pid 3311932 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3311932 00:33:47.862 Received shutdown signal, test time was 
about 2.000000 seconds 00:33:47.862 00:33:47.862 Latency(us) 00:33:47.862 [2024-10-28T14:29:34.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.862 [2024-10-28T14:29:34.729Z] =================================================================================================================== 00:33:47.862 [2024-10-28T14:29:34.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.862 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3311932 00:33:48.122 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3310168 00:33:48.122 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3310168 ']' 00:33:48.122 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3310168 00:33:48.122 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:48.122 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:48.122 15:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3310168 00:33:48.382 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:48.382 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:48.382 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3310168' 00:33:48.382 killing process with pid 3310168 00:33:48.382 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3310168 00:33:48.382 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3310168 00:33:48.643 00:33:48.643 real 0m20.147s 00:33:48.643 user 0m41.886s 00:33:48.643 sys 0m5.411s 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.643 ************************************ 00:33:48.643 END TEST nvmf_digest_error 00:33:48.643 ************************************ 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:48.643 rmmod nvme_tcp 00:33:48.643 rmmod nvme_fabrics 00:33:48.643 rmmod nvme_keyring 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:33:48.643 15:29:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3310168 ']' 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3310168 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3310168 ']' 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3310168 00:33:48.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3310168) - No such process 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3310168 is not found' 00:33:48.643 Process with pid 3310168 is not found 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.643 15:29:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:51.185 00:33:51.185 real 0m47.057s 00:33:51.185 user 1m28.640s 00:33:51.185 sys 0m13.333s 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.185 ************************************ 00:33:51.185 END TEST nvmf_digest 00:33:51.185 ************************************ 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.185 ************************************ 00:33:51.185 START TEST nvmf_bdevperf 00:33:51.185 ************************************ 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:51.185 * Looking for test storage... 00:33:51.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.185 --rc genhtml_branch_coverage=1 00:33:51.185 --rc genhtml_function_coverage=1 00:33:51.185 --rc genhtml_legend=1 00:33:51.185 --rc geninfo_all_blocks=1 00:33:51.185 --rc geninfo_unexecuted_blocks=1 00:33:51.185 00:33:51.185 ' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.185 --rc genhtml_branch_coverage=1 00:33:51.185 --rc genhtml_function_coverage=1 00:33:51.185 --rc genhtml_legend=1 00:33:51.185 --rc geninfo_all_blocks=1 00:33:51.185 --rc geninfo_unexecuted_blocks=1 00:33:51.185 00:33:51.185 ' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.185 --rc genhtml_branch_coverage=1 00:33:51.185 --rc genhtml_function_coverage=1 00:33:51.185 --rc genhtml_legend=1 00:33:51.185 --rc geninfo_all_blocks=1 00:33:51.185 --rc geninfo_unexecuted_blocks=1 00:33:51.185 00:33:51.185 ' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:51.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.185 --rc genhtml_branch_coverage=1 00:33:51.185 --rc genhtml_function_coverage=1 00:33:51.185 --rc genhtml_legend=1 00:33:51.185 --rc geninfo_all_blocks=1 00:33:51.185 --rc geninfo_unexecuted_blocks=1 00:33:51.185 00:33:51.185 ' 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.185 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:51.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.186 15:29:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:54.481 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:54.481 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:54.481 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:54.482 Found net devices under 0000:84:00.0: cvl_0_0 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:54.482 Found net devices under 0000:84:00.1: cvl_0_1 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:54.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:33:54.482 00:33:54.482 --- 10.0.0.2 ping statistics --- 00:33:54.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.482 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:33:54.482 00:33:54.482 --- 10.0.0.1 ping statistics --- 00:33:54.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.482 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3314546 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3314546 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3314546 ']' 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.482 15:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.482 [2024-10-28 15:29:40.919495] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:33:54.482 [2024-10-28 15:29:40.919701] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.482 [2024-10-28 15:29:41.109892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:54.482 [2024-10-28 15:29:41.233379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.482 [2024-10-28 15:29:41.233496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.482 [2024-10-28 15:29:41.233534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.482 [2024-10-28 15:29:41.233574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.482 [2024-10-28 15:29:41.233589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.482 [2024-10-28 15:29:41.236600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.482 [2024-10-28 15:29:41.236712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.482 [2024-10-28 15:29:41.236716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.741 [2024-10-28 15:29:41.393508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.741 Malloc0 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.741 [2024-10-28 15:29:41.468806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.741 { 00:33:54.741 "params": { 00:33:54.741 "name": "Nvme$subsystem", 00:33:54.741 "trtype": "$TEST_TRANSPORT", 00:33:54.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.741 "adrfam": "ipv4", 00:33:54.741 "trsvcid": "$NVMF_PORT", 00:33:54.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.741 "hdgst": ${hdgst:-false}, 00:33:54.741 "ddgst": ${ddgst:-false} 00:33:54.741 }, 00:33:54.741 "method": "bdev_nvme_attach_controller" 00:33:54.741 } 00:33:54.741 EOF 00:33:54.741 )") 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:54.741 15:29:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:54.741 "params": { 00:33:54.741 "name": "Nvme1", 00:33:54.741 "trtype": "tcp", 00:33:54.741 "traddr": "10.0.0.2", 00:33:54.741 "adrfam": "ipv4", 00:33:54.741 "trsvcid": "4420", 00:33:54.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.741 "hdgst": false, 00:33:54.741 "ddgst": false 00:33:54.741 }, 00:33:54.741 "method": "bdev_nvme_attach_controller" 00:33:54.741 }' 00:33:54.741 [2024-10-28 15:29:41.521373] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:33:54.741 [2024-10-28 15:29:41.521455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314688 ] 00:33:54.741 [2024-10-28 15:29:41.600176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.999 [2024-10-28 15:29:41.666936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.257 Running I/O for 1 seconds... 00:33:56.192 8879.00 IOPS, 34.68 MiB/s 00:33:56.192 Latency(us) 00:33:56.192 [2024-10-28T14:29:43.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:56.192 Verification LBA range: start 0x0 length 0x4000 00:33:56.192 Nvme1n1 : 1.01 8906.27 34.79 0.00 0.00 14308.24 3046.21 12621.75 00:33:56.192 [2024-10-28T14:29:43.059Z] =================================================================================================================== 00:33:56.192 [2024-10-28T14:29:43.059Z] Total : 8906.27 34.79 0.00 0.00 14308.24 3046.21 12621.75 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3314833 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.450 { 00:33:56.450 "params": { 00:33:56.450 "name": "Nvme$subsystem", 00:33:56.450 "trtype": "$TEST_TRANSPORT", 00:33:56.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.450 "adrfam": "ipv4", 00:33:56.450 "trsvcid": "$NVMF_PORT", 00:33:56.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.450 "hdgst": ${hdgst:-false}, 00:33:56.450 "ddgst": ${ddgst:-false} 00:33:56.450 }, 00:33:56.450 "method": "bdev_nvme_attach_controller" 00:33:56.450 } 00:33:56.450 EOF 00:33:56.450 )") 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:56.450 15:29:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:56.450 "params": { 00:33:56.450 "name": "Nvme1", 00:33:56.450 "trtype": "tcp", 00:33:56.450 "traddr": "10.0.0.2", 00:33:56.450 "adrfam": "ipv4", 00:33:56.450 "trsvcid": "4420", 00:33:56.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.450 "hdgst": false, 00:33:56.450 "ddgst": false 00:33:56.450 }, 00:33:56.450 "method": "bdev_nvme_attach_controller" 00:33:56.450 }' 00:33:56.450 [2024-10-28 15:29:43.193510] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:33:56.450 [2024-10-28 15:29:43.193601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314833 ] 00:33:56.450 [2024-10-28 15:29:43.268515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.709 [2024-10-28 15:29:43.328068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.709 Running I/O for 15 seconds... 00:33:59.018 8762.00 IOPS, 34.23 MiB/s [2024-10-28T14:29:46.456Z] 8826.50 IOPS, 34.48 MiB/s [2024-10-28T14:29:46.456Z] 15:29:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3314546 00:33:59.589 15:29:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:59.589 [2024-10-28 15:29:46.162182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 
15:29:46.162584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.589 [2024-10-28 15:29:46.162811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.589 [2024-10-28 15:29:46.162831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.162846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.162865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.162881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.162902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.162919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.162937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.162969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.162987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.163006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.163973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.163986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.590 [2024-10-28 15:29:46.164619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.164668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.164726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.164756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.590 [2024-10-28 15:29:46.164787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.590 [2024-10-28 15:29:46.164802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.164817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.164832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.164846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.164861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.164875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.164904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.164919] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.164948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.164971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.164989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.165969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.165989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 
15:29:46.166137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.591 [2024-10-28 15:29:46.166412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.591 [2024-10-28 15:29:46.166434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166902] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.166965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.166989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.592 [2024-10-28 15:29:46.167713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.592 [2024-10-28 15:29:46.167733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf963b0 is same with the state(6) to be set 00:33:59.592 [2024-10-28 15:29:46.167756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:59.592 [2024-10-28 15:29:46.167772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:59.592 [2024-10-28 15:29:46.167787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55376 len:8 PRP1 0x0 PRP2 0x0 00:33:59.592 [2024-10-28 15:29:46.167804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.167966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.592 [2024-10-28 15:29:46.167996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.168017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.592 [2024-10-28 15:29:46.168042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.168060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.592 [2024-10-28 15:29:46.168077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.168095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.592 [2024-10-28 15:29:46.168112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.592 [2024-10-28 15:29:46.168129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.592 [2024-10-28 15:29:46.172545] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.592 [2024-10-28 15:29:46.172594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.592 [2024-10-28 15:29:46.173458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.592 [2024-10-28 15:29:46.173500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.592 [2024-10-28 15:29:46.173522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.592 [2024-10-28 15:29:46.173828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.174127] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.174158] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:33:59.593 [2024-10-28 15:29:46.174181] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.181323] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.593 [2024-10-28 15:29:46.191561] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.192188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.192263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.192314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.192606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.192917] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.192947] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.192966] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.200736] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.593 [2024-10-28 15:29:46.210510] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.211156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.211228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.211281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.211572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.211882] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.211944] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.211979] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.219855] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.593 [2024-10-28 15:29:46.229015] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.229641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.229742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.229799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.230092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.230388] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.230416] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.230464] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.238238] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.593 [2024-10-28 15:29:46.248006] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.248605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.248700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.248753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.249044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.249341] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.249369] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.249417] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.257277] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.593 [2024-10-28 15:29:46.267070] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.267644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.267734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.267786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.268078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.268374] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.268425] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.268460] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.276214] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.593 [2024-10-28 15:29:46.285994] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.286605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.286700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.286750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.287054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.287352] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.287401] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.287436] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.295203] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.593 [2024-10-28 15:29:46.304995] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.305595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.305687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.305736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.306027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.306322] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.306350] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.306367] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.310751] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.593 [2024-10-28 15:29:46.323021] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.323802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.323840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.323862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.324386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.324851] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.593 [2024-10-28 15:29:46.324880] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.593 [2024-10-28 15:29:46.324919] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.593 [2024-10-28 15:29:46.332558] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.593 [2024-10-28 15:29:46.342116] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.593 [2024-10-28 15:29:46.342868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.593 [2024-10-28 15:29:46.342937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.593 [2024-10-28 15:29:46.342977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.593 [2024-10-28 15:29:46.343511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.593 [2024-10-28 15:29:46.344076] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.594 [2024-10-28 15:29:46.344141] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.594 [2024-10-28 15:29:46.344177] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.594 [2024-10-28 15:29:46.352243] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.594 [2024-10-28 15:29:46.360840] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.594 [2024-10-28 15:29:46.361563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.594 [2024-10-28 15:29:46.361631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.594 [2024-10-28 15:29:46.361699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.594 [2024-10-28 15:29:46.362236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.594 [2024-10-28 15:29:46.362797] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.594 [2024-10-28 15:29:46.362849] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.594 [2024-10-28 15:29:46.362882] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.594 [2024-10-28 15:29:46.370956] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.594 [2024-10-28 15:29:46.379533] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.594 [2024-10-28 15:29:46.380307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.594 [2024-10-28 15:29:46.380375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.594 [2024-10-28 15:29:46.380416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.594 [2024-10-28 15:29:46.380976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.594 [2024-10-28 15:29:46.381519] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.594 [2024-10-28 15:29:46.381569] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.594 [2024-10-28 15:29:46.381603] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.594 [2024-10-28 15:29:46.389696] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.594 [2024-10-28 15:29:46.398264] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.594 [2024-10-28 15:29:46.399027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.594 [2024-10-28 15:29:46.399095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.594 [2024-10-28 15:29:46.399135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.594 [2024-10-28 15:29:46.399690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.594 [2024-10-28 15:29:46.400233] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.594 [2024-10-28 15:29:46.400285] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.594 [2024-10-28 15:29:46.400320] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.594 [2024-10-28 15:29:46.408476] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.594 [2024-10-28 15:29:46.417078] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.594 [2024-10-28 15:29:46.417856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.594 [2024-10-28 15:29:46.417925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.594 [2024-10-28 15:29:46.417965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.594 [2024-10-28 15:29:46.418501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.594 [2024-10-28 15:29:46.419070] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.594 [2024-10-28 15:29:46.419122] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.594 [2024-10-28 15:29:46.419157] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.594 [2024-10-28 15:29:46.427268] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.594 [2024-10-28 15:29:46.435856] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.594 [2024-10-28 15:29:46.436617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.594 [2024-10-28 15:29:46.436704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.594 [2024-10-28 15:29:46.436748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.594 [2024-10-28 15:29:46.437281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.594 [2024-10-28 15:29:46.437842] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.594 [2024-10-28 15:29:46.437895] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.594 [2024-10-28 15:29:46.437929] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.594 [2024-10-28 15:29:46.446071] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.856 [2024-10-28 15:29:46.454961] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.455734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.455806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.455848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.456386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.456952] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.457004] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.856 [2024-10-28 15:29:46.457039] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.856 [2024-10-28 15:29:46.465255] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.856 [2024-10-28 15:29:46.473851] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.474622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.474727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.474772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.475306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.475874] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.475928] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.856 [2024-10-28 15:29:46.475963] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.856 [2024-10-28 15:29:46.484034] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.856 [2024-10-28 15:29:46.492484] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.493280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.493348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.493387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.493946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.494487] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.494537] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.856 [2024-10-28 15:29:46.494572] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.856 [2024-10-28 15:29:46.502635] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.856 [2024-10-28 15:29:46.510977] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.511754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.511823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.511863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.512398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.512967] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.513019] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.856 [2024-10-28 15:29:46.513053] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.856 [2024-10-28 15:29:46.521163] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.856 [2024-10-28 15:29:46.529765] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.530511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.530579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.530619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.531189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.531749] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.531800] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.856 [2024-10-28 15:29:46.531834] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.856 7737.33 IOPS, 30.22 MiB/s [2024-10-28T14:29:46.723Z] [2024-10-28 15:29:46.540189] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:33:59.856 [2024-10-28 15:29:46.544005] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.856 [2024-10-28 15:29:46.558757] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.559504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.559573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.559613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.560167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.560728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.560778] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.856 [2024-10-28 15:29:46.560812] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.856 [2024-10-28 15:29:46.568881] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.856 [2024-10-28 15:29:46.577447] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.578196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.578264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.578304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.578866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.579409] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.579459] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.856 [2024-10-28 15:29:46.579493] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.856 [2024-10-28 15:29:46.587567] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.856 [2024-10-28 15:29:46.596143] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.856 [2024-10-28 15:29:46.596898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.856 [2024-10-28 15:29:46.596967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.856 [2024-10-28 15:29:46.597007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.856 [2024-10-28 15:29:46.597540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.856 [2024-10-28 15:29:46.598119] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.856 [2024-10-28 15:29:46.598172] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.857 [2024-10-28 15:29:46.598207] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.857 [2024-10-28 15:29:46.606282] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.857 [2024-10-28 15:29:46.614862] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.857 [2024-10-28 15:29:46.615585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.857 [2024-10-28 15:29:46.615668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.857 [2024-10-28 15:29:46.615713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.857 [2024-10-28 15:29:46.616247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.857 [2024-10-28 15:29:46.616807] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.857 [2024-10-28 15:29:46.616857] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.857 [2024-10-28 15:29:46.616893] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.857 [2024-10-28 15:29:46.624995] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.857 [2024-10-28 15:29:46.633562] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.857 [2024-10-28 15:29:46.634335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.857 [2024-10-28 15:29:46.634403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.857 [2024-10-28 15:29:46.634444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.857 [2024-10-28 15:29:46.635003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.857 [2024-10-28 15:29:46.635546] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.857 [2024-10-28 15:29:46.635596] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.857 [2024-10-28 15:29:46.635629] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.857 [2024-10-28 15:29:46.643713] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.857 [2024-10-28 15:29:46.652306] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.857 [2024-10-28 15:29:46.653086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.857 [2024-10-28 15:29:46.653155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.857 [2024-10-28 15:29:46.653196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.857 [2024-10-28 15:29:46.653757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.857 [2024-10-28 15:29:46.654303] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.857 [2024-10-28 15:29:46.654354] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.857 [2024-10-28 15:29:46.654401] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.857 [2024-10-28 15:29:46.662465] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.857 [2024-10-28 15:29:46.671052] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.857 [2024-10-28 15:29:46.671825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.857 [2024-10-28 15:29:46.671893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.857 [2024-10-28 15:29:46.671932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.857 [2024-10-28 15:29:46.672465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.857 [2024-10-28 15:29:46.673033] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.857 [2024-10-28 15:29:46.673086] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.857 [2024-10-28 15:29:46.673120] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.857 [2024-10-28 15:29:46.681184] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.857 [2024-10-28 15:29:46.689771] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.857 [2024-10-28 15:29:46.690528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.857 [2024-10-28 15:29:46.690595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.857 [2024-10-28 15:29:46.690634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.857 [2024-10-28 15:29:46.691187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.857 [2024-10-28 15:29:46.691750] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.857 [2024-10-28 15:29:46.691803] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.857 [2024-10-28 15:29:46.691838] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.857 [2024-10-28 15:29:46.697739] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.857 [2024-10-28 15:29:46.706963] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.857 [2024-10-28 15:29:46.707714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.857 [2024-10-28 15:29:46.707745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:33:59.857 [2024-10-28 15:29:46.707763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:33:59.857 [2024-10-28 15:29:46.708124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:33:59.857 [2024-10-28 15:29:46.708686] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.857 [2024-10-28 15:29:46.708730] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.857 [2024-10-28 15:29:46.708746] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.857 [2024-10-28 15:29:46.715385] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.119 [2024-10-28 15:29:46.724749] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.119 [2024-10-28 15:29:46.725550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.119 [2024-10-28 15:29:46.725628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.119 [2024-10-28 15:29:46.725717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.119 [2024-10-28 15:29:46.726272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.119 [2024-10-28 15:29:46.726838] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.119 [2024-10-28 15:29:46.726900] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.119 [2024-10-28 15:29:46.726946] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.119 [2024-10-28 15:29:46.735047] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.119 [2024-10-28 15:29:46.743618] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.119 [2024-10-28 15:29:46.744431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.119 [2024-10-28 15:29:46.744500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.119 [2024-10-28 15:29:46.744540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.119 [2024-10-28 15:29:46.745101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.119 [2024-10-28 15:29:46.745645] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.119 [2024-10-28 15:29:46.745713] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.119 [2024-10-28 15:29:46.745747] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.119 [2024-10-28 15:29:46.753808] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.119 [2024-10-28 15:29:46.762365] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.119 [2024-10-28 15:29:46.763153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.119 [2024-10-28 15:29:46.763223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.119 [2024-10-28 15:29:46.763263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.119 [2024-10-28 15:29:46.763824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.119 [2024-10-28 15:29:46.764367] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.119 [2024-10-28 15:29:46.764417] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.119 [2024-10-28 15:29:46.764452] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.119 [2024-10-28 15:29:46.772531] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.119 [2024-10-28 15:29:46.781113] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.119 [2024-10-28 15:29:46.781889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.119 [2024-10-28 15:29:46.781958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.119 [2024-10-28 15:29:46.782028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.119 [2024-10-28 15:29:46.782566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.119 [2024-10-28 15:29:46.783134] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.119 [2024-10-28 15:29:46.783197] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.119 [2024-10-28 15:29:46.783234] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.119 [2024-10-28 15:29:46.791304] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.119 [2024-10-28 15:29:46.799890] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.119 [2024-10-28 15:29:46.800673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.119 [2024-10-28 15:29:46.800741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.119 [2024-10-28 15:29:46.800779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.119 [2024-10-28 15:29:46.801311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.119 [2024-10-28 15:29:46.801869] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.119 [2024-10-28 15:29:46.801920] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.119 [2024-10-28 15:29:46.801953] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.119 [2024-10-28 15:29:46.810041] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.119 [2024-10-28 15:29:46.818613] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.119 [2024-10-28 15:29:46.819429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.119 [2024-10-28 15:29:46.819495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.119 [2024-10-28 15:29:46.819533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.119 [2024-10-28 15:29:46.820092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.119 [2024-10-28 15:29:46.820642] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.119 [2024-10-28 15:29:46.820708] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.119 [2024-10-28 15:29:46.820742] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.119 [2024-10-28 15:29:46.828834] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.119 [2024-10-28 15:29:46.837402] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.119 [2024-10-28 15:29:46.838192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.119 [2024-10-28 15:29:46.838260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.119 [2024-10-28 15:29:46.838299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.119 [2024-10-28 15:29:46.838854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.119 [2024-10-28 15:29:46.839415] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.119 [2024-10-28 15:29:46.839467] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.119 [2024-10-28 15:29:46.839502] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.847580] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.120 [2024-10-28 15:29:46.856134] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.120 [2024-10-28 15:29:46.856919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.120 [2024-10-28 15:29:46.856993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.120 [2024-10-28 15:29:46.857035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.120 [2024-10-28 15:29:46.857568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.120 [2024-10-28 15:29:46.858132] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.120 [2024-10-28 15:29:46.858187] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.120 [2024-10-28 15:29:46.858222] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.866276] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.120 [2024-10-28 15:29:46.874861] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.120 [2024-10-28 15:29:46.875622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.120 [2024-10-28 15:29:46.875709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.120 [2024-10-28 15:29:46.875762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.120 [2024-10-28 15:29:46.876295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.120 [2024-10-28 15:29:46.876864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.120 [2024-10-28 15:29:46.876919] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.120 [2024-10-28 15:29:46.876955] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.885013] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.120 [2024-10-28 15:29:46.893579] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.120 [2024-10-28 15:29:46.894396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.120 [2024-10-28 15:29:46.894467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.120 [2024-10-28 15:29:46.894508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.120 [2024-10-28 15:29:46.895067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.120 [2024-10-28 15:29:46.895616] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.120 [2024-10-28 15:29:46.895689] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.120 [2024-10-28 15:29:46.895725] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.903833] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.120 [2024-10-28 15:29:46.912446] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.120 [2024-10-28 15:29:46.913261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.120 [2024-10-28 15:29:46.913341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.120 [2024-10-28 15:29:46.913382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.120 [2024-10-28 15:29:46.913944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.120 [2024-10-28 15:29:46.914498] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.120 [2024-10-28 15:29:46.914551] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.120 [2024-10-28 15:29:46.914586] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.922718] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.120 [2024-10-28 15:29:46.931311] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.120 [2024-10-28 15:29:46.932090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.120 [2024-10-28 15:29:46.932163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.120 [2024-10-28 15:29:46.932203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.120 [2024-10-28 15:29:46.932769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.120 [2024-10-28 15:29:46.933319] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.120 [2024-10-28 15:29:46.933373] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.120 [2024-10-28 15:29:46.933407] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.941475] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.120 [2024-10-28 15:29:46.950058] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.120 [2024-10-28 15:29:46.950842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.120 [2024-10-28 15:29:46.950914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.120 [2024-10-28 15:29:46.950954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.120 [2024-10-28 15:29:46.951488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.120 [2024-10-28 15:29:46.952061] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.120 [2024-10-28 15:29:46.952117] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.120 [2024-10-28 15:29:46.952152] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.960218] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.120 [2024-10-28 15:29:46.968823] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.120 [2024-10-28 15:29:46.969633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.120 [2024-10-28 15:29:46.969723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.120 [2024-10-28 15:29:46.969765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.120 [2024-10-28 15:29:46.970298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.120 [2024-10-28 15:29:46.970866] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.120 [2024-10-28 15:29:46.970921] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.120 [2024-10-28 15:29:46.970957] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.120 [2024-10-28 15:29:46.979097] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.381 [2024-10-28 15:29:46.988009] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.381 [2024-10-28 15:29:46.988883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.381 [2024-10-28 15:29:46.988960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.381 [2024-10-28 15:29:46.989003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.381 [2024-10-28 15:29:46.989539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.381 [2024-10-28 15:29:46.990126] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.381 [2024-10-28 15:29:46.990187] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.381 [2024-10-28 15:29:46.990237] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:46.998318] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.382 [2024-10-28 15:29:47.006927] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.007742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.007818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.007860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.008403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.008971] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.382 [2024-10-28 15:29:47.009026] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.382 [2024-10-28 15:29:47.009061] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:47.016841] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.382 [2024-10-28 15:29:47.025951] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.026766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.026840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.026906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.027444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.028020] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.382 [2024-10-28 15:29:47.028076] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.382 [2024-10-28 15:29:47.028111] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:47.036185] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.382 [2024-10-28 15:29:47.044772] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.045643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.045732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.045774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.046307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.046874] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.382 [2024-10-28 15:29:47.046941] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.382 [2024-10-28 15:29:47.046976] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:47.055038] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.382 [2024-10-28 15:29:47.063618] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.064443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.064514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.064554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.065107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.065670] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.382 [2024-10-28 15:29:47.065734] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.382 [2024-10-28 15:29:47.065770] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:47.073842] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.382 [2024-10-28 15:29:47.081839] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.082633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.082721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.082763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.083296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.083875] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.382 [2024-10-28 15:29:47.083952] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.382 [2024-10-28 15:29:47.083988] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:47.092065] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.382 [2024-10-28 15:29:47.100639] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.101419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.101490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.101531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.102089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.102638] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.382 [2024-10-28 15:29:47.102709] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.382 [2024-10-28 15:29:47.102744] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:47.110879] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.382 [2024-10-28 15:29:47.119446] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.120244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.120317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.120357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.120915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.121481] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.382 [2024-10-28 15:29:47.121534] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.382 [2024-10-28 15:29:47.121568] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.382 [2024-10-28 15:29:47.129680] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.382 [2024-10-28 15:29:47.138274] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.382 [2024-10-28 15:29:47.139085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.382 [2024-10-28 15:29:47.139155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.382 [2024-10-28 15:29:47.139196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.382 [2024-10-28 15:29:47.139758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.382 [2024-10-28 15:29:47.140307] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.383 [2024-10-28 15:29:47.140359] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.383 [2024-10-28 15:29:47.140395] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.383 [2024-10-28 15:29:47.148474] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.383 [2024-10-28 15:29:47.157080] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.383 [2024-10-28 15:29:47.157863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.383 [2024-10-28 15:29:47.157933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.383 [2024-10-28 15:29:47.157973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.383 [2024-10-28 15:29:47.158506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.383 [2024-10-28 15:29:47.159069] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.383 [2024-10-28 15:29:47.159123] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.383 [2024-10-28 15:29:47.159157] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.383 [2024-10-28 15:29:47.167224] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.383 [2024-10-28 15:29:47.175807] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.383 [2024-10-28 15:29:47.176557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.383 [2024-10-28 15:29:47.176626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.383 [2024-10-28 15:29:47.176693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.383 [2024-10-28 15:29:47.177229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.383 [2024-10-28 15:29:47.177794] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.383 [2024-10-28 15:29:47.177848] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.383 [2024-10-28 15:29:47.177882] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.383 [2024-10-28 15:29:47.186303] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.383 [2024-10-28 15:29:47.194909] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.383 [2024-10-28 15:29:47.195698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.383 [2024-10-28 15:29:47.195770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.383 [2024-10-28 15:29:47.195810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.383 [2024-10-28 15:29:47.196342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.383 [2024-10-28 15:29:47.196915] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.383 [2024-10-28 15:29:47.196969] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.383 [2024-10-28 15:29:47.197003] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.383 [2024-10-28 15:29:47.205091] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.383 [2024-10-28 15:29:47.213671] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.383 [2024-10-28 15:29:47.214469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.383 [2024-10-28 15:29:47.214538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.383 [2024-10-28 15:29:47.214579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.383 [2024-10-28 15:29:47.215136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.383 [2024-10-28 15:29:47.215708] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.383 [2024-10-28 15:29:47.215763] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.383 [2024-10-28 15:29:47.215798] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.383 [2024-10-28 15:29:47.223896] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.383 [2024-10-28 15:29:47.232478] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.383 [2024-10-28 15:29:47.233307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.383 [2024-10-28 15:29:47.233378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.383 [2024-10-28 15:29:47.233417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.383 [2024-10-28 15:29:47.233973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.383 [2024-10-28 15:29:47.234521] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.383 [2024-10-28 15:29:47.234573] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.383 [2024-10-28 15:29:47.234608] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.383 [2024-10-28 15:29:47.242768] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.644 [2024-10-28 15:29:47.251638] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.644 [2024-10-28 15:29:47.252510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.644 [2024-10-28 15:29:47.252583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.644 [2024-10-28 15:29:47.252625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.253208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.253780] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.253833] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.253868] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.262023] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.645 [2024-10-28 15:29:47.270635] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.271457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.271530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.271570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.272147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.272715] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.272769] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.272803] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.280869] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.645 [2024-10-28 15:29:47.289432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.290255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.290326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.290366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.290928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.291477] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.291530] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.291564] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.299640] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.645 [2024-10-28 15:29:47.308239] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.309061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.309134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.309175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.309735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.310282] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.310335] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.310370] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.318436] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.645 [2024-10-28 15:29:47.327064] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.327925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.327996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.328036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.328570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.329143] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.329209] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.329247] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.337309] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.645 [2024-10-28 15:29:47.345905] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.346677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.346748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.346789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.347324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.347894] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.347947] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.347982] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.356063] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.645 [2024-10-28 15:29:47.364642] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.365246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.365316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.365356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.365917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.366460] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.366512] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.366546] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.374617] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.645 [2024-10-28 15:29:47.383696] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.384472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.384543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.384583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.385141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.385706] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.385760] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.385793] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.645 [2024-10-28 15:29:47.393880] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.645 [2024-10-28 15:29:47.402448] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.645 [2024-10-28 15:29:47.403282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.645 [2024-10-28 15:29:47.403352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.645 [2024-10-28 15:29:47.403392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.645 [2024-10-28 15:29:47.403961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.645 [2024-10-28 15:29:47.404508] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.645 [2024-10-28 15:29:47.404561] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.645 [2024-10-28 15:29:47.404595] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.646 [2024-10-28 15:29:47.412679] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.646 [2024-10-28 15:29:47.421242] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.646 [2024-10-28 15:29:47.422063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.646 [2024-10-28 15:29:47.422133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.646 [2024-10-28 15:29:47.422173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.646 [2024-10-28 15:29:47.422730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.646 [2024-10-28 15:29:47.423283] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.646 [2024-10-28 15:29:47.423336] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.646 [2024-10-28 15:29:47.423371] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.646 [2024-10-28 15:29:47.431392] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.646 [2024-10-28 15:29:47.439985] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.646 [2024-10-28 15:29:47.440805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.646 [2024-10-28 15:29:47.440880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.646 [2024-10-28 15:29:47.440922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.646 [2024-10-28 15:29:47.441456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.646 [2024-10-28 15:29:47.442028] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.646 [2024-10-28 15:29:47.442081] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.646 [2024-10-28 15:29:47.442115] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.646 [2024-10-28 15:29:47.450173] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.646 [2024-10-28 15:29:47.458758] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.646 [2024-10-28 15:29:47.459578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.646 [2024-10-28 15:29:47.459648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.646 [2024-10-28 15:29:47.459712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.646 [2024-10-28 15:29:47.460248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.646 [2024-10-28 15:29:47.460815] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.646 [2024-10-28 15:29:47.460869] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.646 [2024-10-28 15:29:47.460904] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.646 [2024-10-28 15:29:47.468959] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.646 [2024-10-28 15:29:47.477529] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.646 [2024-10-28 15:29:47.478345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.646 [2024-10-28 15:29:47.478416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.646 [2024-10-28 15:29:47.478456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.646 [2024-10-28 15:29:47.479014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.646 [2024-10-28 15:29:47.479562] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.646 [2024-10-28 15:29:47.479613] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.646 [2024-10-28 15:29:47.479646] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.646 [2024-10-28 15:29:47.487715] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.646 [2024-10-28 15:29:47.496323] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.646 [2024-10-28 15:29:47.497167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.646 [2024-10-28 15:29:47.497237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.646 [2024-10-28 15:29:47.497277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.646 [2024-10-28 15:29:47.497835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.646 [2024-10-28 15:29:47.498382] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.646 [2024-10-28 15:29:47.498434] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.646 [2024-10-28 15:29:47.498470] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.646 [2024-10-28 15:29:47.506649] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.908 [2024-10-28 15:29:47.515493] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.516369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.516444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.516485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.516881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.517339] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.517393] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.517426] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.525512] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.908 [2024-10-28 15:29:47.538493] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 5803.00 IOPS, 22.67 MiB/s [2024-10-28T14:29:47.775Z] [2024-10-28 15:29:47.539345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.539417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.539458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.540018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.540562] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.540619] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.540669] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.548733] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.908 [2024-10-28 15:29:47.557290] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.558094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.558173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.558212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.558771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.559320] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.559372] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.559406] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.567452] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.908 [2024-10-28 15:29:47.576021] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.576889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.576962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.577002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.577535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.578101] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.578168] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.578204] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.586258] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.908 [2024-10-28 15:29:47.594834] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.595627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.595713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.595754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.596287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.596853] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.596907] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.596942] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.604986] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.908 [2024-10-28 15:29:47.613548] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.614381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.614452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.614493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.615047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.615595] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.615647] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.615700] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.623747] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.908 [2024-10-28 15:29:47.632336] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.633168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.633239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.633278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.633837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.634385] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.634437] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.634470] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.642539] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.908 [2024-10-28 15:29:47.651105] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.651906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.908 [2024-10-28 15:29:47.651977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.908 [2024-10-28 15:29:47.652017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.908 [2024-10-28 15:29:47.652549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.908 [2024-10-28 15:29:47.653114] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.908 [2024-10-28 15:29:47.653167] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.908 [2024-10-28 15:29:47.653200] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.908 [2024-10-28 15:29:47.661250] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.908 [2024-10-28 15:29:47.669812] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.908 [2024-10-28 15:29:47.670638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.909 [2024-10-28 15:29:47.670723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.909 [2024-10-28 15:29:47.670763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.909 [2024-10-28 15:29:47.671295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.909 [2024-10-28 15:29:47.671864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.909 [2024-10-28 15:29:47.671918] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.909 [2024-10-28 15:29:47.671953] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.909 [2024-10-28 15:29:47.679993] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.909 [2024-10-28 15:29:47.688573] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.909 [2024-10-28 15:29:47.689398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.909 [2024-10-28 15:29:47.689468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.909 [2024-10-28 15:29:47.689508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.909 [2024-10-28 15:29:47.690064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.909 [2024-10-28 15:29:47.690611] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.909 [2024-10-28 15:29:47.690680] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.909 [2024-10-28 15:29:47.690720] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.909 [2024-10-28 15:29:47.698757] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.909 [2024-10-28 15:29:47.707348] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.909 [2024-10-28 15:29:47.708182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.909 [2024-10-28 15:29:47.708254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.909 [2024-10-28 15:29:47.708296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.909 [2024-10-28 15:29:47.708853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.909 [2024-10-28 15:29:47.709402] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.909 [2024-10-28 15:29:47.709454] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.909 [2024-10-28 15:29:47.709489] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.909 [2024-10-28 15:29:47.717307] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.909 [2024-10-28 15:29:47.726361] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.909 [2024-10-28 15:29:47.727224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.909 [2024-10-28 15:29:47.727295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.909 [2024-10-28 15:29:47.727335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.909 [2024-10-28 15:29:47.727891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.909 [2024-10-28 15:29:47.728435] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.909 [2024-10-28 15:29:47.728487] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.909 [2024-10-28 15:29:47.728520] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.909 [2024-10-28 15:29:47.736378] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.909 [2024-10-28 15:29:47.745422] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.909 [2024-10-28 15:29:47.746062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.909 [2024-10-28 15:29:47.746132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.909 [2024-10-28 15:29:47.746172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.909 [2024-10-28 15:29:47.746724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.909 [2024-10-28 15:29:47.747270] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.909 [2024-10-28 15:29:47.747321] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.909 [2024-10-28 15:29:47.747355] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.909 [2024-10-28 15:29:47.755471] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.909 [2024-10-28 15:29:47.764532] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.909 [2024-10-28 15:29:47.765348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.909 [2024-10-28 15:29:47.765418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:00.909 [2024-10-28 15:29:47.765458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:00.909 [2024-10-28 15:29:47.766035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:00.909 [2024-10-28 15:29:47.766578] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.909 [2024-10-28 15:29:47.766631] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.909 [2024-10-28 15:29:47.766686] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.170 [2024-10-28 15:29:47.775047] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.170 [2024-10-28 15:29:47.783742] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.170 [2024-10-28 15:29:47.784592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.170 [2024-10-28 15:29:47.784681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.170 [2024-10-28 15:29:47.784726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.170 [2024-10-28 15:29:47.785260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.170 [2024-10-28 15:29:47.785829] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.170 [2024-10-28 15:29:47.785882] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.170 [2024-10-28 15:29:47.785917] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.170 [2024-10-28 15:29:47.793970] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.170 [2024-10-28 15:29:47.802520] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.170 [2024-10-28 15:29:47.803308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.170 [2024-10-28 15:29:47.803380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.170 [2024-10-28 15:29:47.803420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.170 [2024-10-28 15:29:47.803990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.170 [2024-10-28 15:29:47.804535] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.170 [2024-10-28 15:29:47.804586] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.170 [2024-10-28 15:29:47.804619] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.170 [2024-10-28 15:29:47.812699] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.170 [2024-10-28 15:29:47.821260] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.170 [2024-10-28 15:29:47.822088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.170 [2024-10-28 15:29:47.822158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.170 [2024-10-28 15:29:47.822198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.170 [2024-10-28 15:29:47.822757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.170 [2024-10-28 15:29:47.823310] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.170 [2024-10-28 15:29:47.823375] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.170 [2024-10-28 15:29:47.823411] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.170 [2024-10-28 15:29:47.831497] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.170 [2024-10-28 15:29:47.840065] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.170 [2024-10-28 15:29:47.840857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.170 [2024-10-28 15:29:47.840928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.170 [2024-10-28 15:29:47.840968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.170 [2024-10-28 15:29:47.841500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.170 [2024-10-28 15:29:47.842067] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.170 [2024-10-28 15:29:47.842121] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.170 [2024-10-28 15:29:47.842156] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.170 [2024-10-28 15:29:47.850232] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.170 [2024-10-28 15:29:47.858819] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.170 [2024-10-28 15:29:47.859338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.170 [2024-10-28 15:29:47.859409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.170 [2024-10-28 15:29:47.859450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.170 [2024-10-28 15:29:47.860009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.170 [2024-10-28 15:29:47.860553] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.170 [2024-10-28 15:29:47.860605] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.170 [2024-10-28 15:29:47.860640] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.170 [2024-10-28 15:29:47.868549] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.170 [2024-10-28 15:29:47.877590] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.170 [2024-10-28 15:29:47.878412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.170 [2024-10-28 15:29:47.878483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.170 [2024-10-28 15:29:47.878522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.170 [2024-10-28 15:29:47.879082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.170 [2024-10-28 15:29:47.879626] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.170 [2024-10-28 15:29:47.879697] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.170 [2024-10-28 15:29:47.879733] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.170 [2024-10-28 15:29:47.887808] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.170 [2024-10-28 15:29:47.896407] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.170 [2024-10-28 15:29:47.897250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.170 [2024-10-28 15:29:47.897322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:47.897362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:47.897922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:47.898470] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:47.898521] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:47.898554] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.171 [2024-10-28 15:29:47.906623] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.171 [2024-10-28 15:29:47.915206] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.171 [2024-10-28 15:29:47.916027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.171 [2024-10-28 15:29:47.916100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:47.916140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:47.916696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:47.917245] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:47.917297] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:47.917331] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.171 [2024-10-28 15:29:47.925143] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.171 [2024-10-28 15:29:47.934217] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.171 [2024-10-28 15:29:47.935039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.171 [2024-10-28 15:29:47.935110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:47.935151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:47.935709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:47.936263] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:47.936315] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:47.936349] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.171 [2024-10-28 15:29:47.944405] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.171 [2024-10-28 15:29:47.952967] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.171 [2024-10-28 15:29:47.953746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.171 [2024-10-28 15:29:47.953836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:47.953877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:47.954411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:47.954976] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:47.955030] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:47.955064] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.171 [2024-10-28 15:29:47.963124] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.171 [2024-10-28 15:29:47.971684] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.171 [2024-10-28 15:29:47.972500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.171 [2024-10-28 15:29:47.972571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:47.972612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:47.973165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:47.973733] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:47.973786] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:47.973820] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.171 [2024-10-28 15:29:47.981867] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.171 [2024-10-28 15:29:47.990416] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.171 [2024-10-28 15:29:47.991281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.171 [2024-10-28 15:29:47.991351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:47.991390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:47.991950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:47.992497] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:47.992549] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:47.992582] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.171 [2024-10-28 15:29:48.000633] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.171 [2024-10-28 15:29:48.009306] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.171 [2024-10-28 15:29:48.010102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.171 [2024-10-28 15:29:48.010175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:48.010217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:48.010787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:48.011336] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:48.011388] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:48.011422] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.171 [2024-10-28 15:29:48.019479] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.171 [2024-10-28 15:29:48.027758] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.171 [2024-10-28 15:29:48.028582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.171 [2024-10-28 15:29:48.028671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.171 [2024-10-28 15:29:48.028723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.171 [2024-10-28 15:29:48.029256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.171 [2024-10-28 15:29:48.029821] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.171 [2024-10-28 15:29:48.029877] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.171 [2024-10-28 15:29:48.029935] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.038277] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.433 [2024-10-28 15:29:48.046981] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.047786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.047862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.047904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.048438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.049012] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.049067] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.049102] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.057165] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.433 [2024-10-28 15:29:48.065738] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.066511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.066584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.066626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.067194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.067759] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.067826] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.067862] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.075918] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.433 [2024-10-28 15:29:48.084478] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.085298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.085371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.085412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.085972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.086521] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.086574] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.086608] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.094677] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.433 [2024-10-28 15:29:48.103304] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.104131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.104205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.104248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.104805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.105356] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.105409] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.105445] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.113514] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.433 [2024-10-28 15:29:48.120682] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.121201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.121273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.121312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.121785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.122171] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.122226] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.122261] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.128758] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.433 [2024-10-28 15:29:48.137885] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.138644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.138731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.138773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.139306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.139873] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.139926] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.139962] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.148026] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.433 [2024-10-28 15:29:48.156596] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.157412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.157495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.157537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.158099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.158648] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.158719] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.158753] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.166857] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.433 [2024-10-28 15:29:48.175474] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.176235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.176309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.176349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.176906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.177456] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.177510] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.177544] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.185609] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.433 [2024-10-28 15:29:48.194183] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.195020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.195114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.195156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.195714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.196263] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.196318] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.196352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.204229] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.433 [2024-10-28 15:29:48.213635] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.433 [2024-10-28 15:29:48.214435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.433 [2024-10-28 15:29:48.214507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.433 [2024-10-28 15:29:48.214547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.433 [2024-10-28 15:29:48.215116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.433 [2024-10-28 15:29:48.215686] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.433 [2024-10-28 15:29:48.215741] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.433 [2024-10-28 15:29:48.215778] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.433 [2024-10-28 15:29:48.223834] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.434 [2024-10-28 15:29:48.232433] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.434 [2024-10-28 15:29:48.233254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.434 [2024-10-28 15:29:48.233326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.434 [2024-10-28 15:29:48.233368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.434 [2024-10-28 15:29:48.233924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.434 [2024-10-28 15:29:48.234473] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.434 [2024-10-28 15:29:48.234526] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.434 [2024-10-28 15:29:48.234561] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.434 [2024-10-28 15:29:48.242644] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.434 [2024-10-28 15:29:48.251239] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.434 [2024-10-28 15:29:48.252058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.434 [2024-10-28 15:29:48.252130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.434 [2024-10-28 15:29:48.252171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.434 [2024-10-28 15:29:48.252741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.434 [2024-10-28 15:29:48.253291] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.434 [2024-10-28 15:29:48.253344] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.434 [2024-10-28 15:29:48.253378] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.434 [2024-10-28 15:29:48.261496] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.434 [2024-10-28 15:29:48.270100] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.434 [2024-10-28 15:29:48.270889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.434 [2024-10-28 15:29:48.270962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.434 [2024-10-28 15:29:48.271003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.434 [2024-10-28 15:29:48.271537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.434 [2024-10-28 15:29:48.272109] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.434 [2024-10-28 15:29:48.272165] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.434 [2024-10-28 15:29:48.272199] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.434 [2024-10-28 15:29:48.280266] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.434 [2024-10-28 15:29:48.288350] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.434 [2024-10-28 15:29:48.288823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.434 [2024-10-28 15:29:48.288896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.434 [2024-10-28 15:29:48.288937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.434 [2024-10-28 15:29:48.289470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.434 [2024-10-28 15:29:48.290046] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.434 [2024-10-28 15:29:48.290102] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.434 [2024-10-28 15:29:48.290138] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.695 [2024-10-28 15:29:48.298455] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.695 [2024-10-28 15:29:48.307255] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.695 [2024-10-28 15:29:48.308111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.695 [2024-10-28 15:29:48.308187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.695 [2024-10-28 15:29:48.308228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.695 [2024-10-28 15:29:48.308785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.695 [2024-10-28 15:29:48.309335] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.695 [2024-10-28 15:29:48.309401] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.695 [2024-10-28 15:29:48.309438] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.695 [2024-10-28 15:29:48.317497] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.695 [2024-10-28 15:29:48.326099] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.695 [2024-10-28 15:29:48.326835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.695 [2024-10-28 15:29:48.326908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.695 [2024-10-28 15:29:48.326949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.695 [2024-10-28 15:29:48.327483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.695 [2024-10-28 15:29:48.328047] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.695 [2024-10-28 15:29:48.328102] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.695 [2024-10-28 15:29:48.328136] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.695 [2024-10-28 15:29:48.336240] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.695 [2024-10-28 15:29:48.344836] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.695 [2024-10-28 15:29:48.345629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.695 [2024-10-28 15:29:48.345721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.695 [2024-10-28 15:29:48.345764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.695 [2024-10-28 15:29:48.346299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.695 [2024-10-28 15:29:48.346872] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.346926] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.346972] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.355028] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.696 [2024-10-28 15:29:48.363608] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.364329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.364401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.364442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.364994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.365541] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.365594] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.365628] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.373708] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.696 [2024-10-28 15:29:48.382776] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.383547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.383620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.383680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.384219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.384782] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.384836] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.384870] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.392480] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.696 [2024-10-28 15:29:48.401769] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.402595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.402680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.402724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.403258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.403834] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.403890] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.403925] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.411987] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.696 [2024-10-28 15:29:48.420565] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.421402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.421473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.421514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.422074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.422627] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.422701] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.422739] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.430826] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.696 [2024-10-28 15:29:48.439391] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.440221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.440305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.440347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.440905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.441454] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.441507] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.441540] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.449606] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.696 [2024-10-28 15:29:48.458362] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.459172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.459246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.459288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.459848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.460399] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.460452] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.460486] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.468546] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.696 [2024-10-28 15:29:48.477109] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.477909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.477990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.478029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.478562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.479129] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.479185] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.479220] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.487276] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.696 [2024-10-28 15:29:48.495849] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.496679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.496760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.496801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.497348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.497922] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.497976] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.498011] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.506076] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.696 [2024-10-28 15:29:48.514634] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.515451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.515522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.515564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.516121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.516765] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.516822] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.516857] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 [2024-10-28 15:29:48.524805] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.696 [2024-10-28 15:29:48.533218] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.696 [2024-10-28 15:29:48.534048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.696 [2024-10-28 15:29:48.534121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.696 [2024-10-28 15:29:48.534161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.696 [2024-10-28 15:29:48.534721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.696 [2024-10-28 15:29:48.535265] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.696 [2024-10-28 15:29:48.535317] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.696 [2024-10-28 15:29:48.535351] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.696 4642.40 IOPS, 18.13 MiB/s [2024-10-28T14:29:48.563Z] [2024-10-28 15:29:48.543646] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:34:01.697 [2024-10-28 15:29:48.547462] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.957 [2024-10-28 15:29:48.562491] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.957 [2024-10-28 15:29:48.563334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.957 [2024-10-28 15:29:48.563409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.563451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.564044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.564634] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.564711] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.564750] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.572881] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.958 [2024-10-28 15:29:48.581436] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.582290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.582366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.582410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.582970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.583518] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.583572] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.583606] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.591672] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.958 [2024-10-28 15:29:48.600221] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.601069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.601141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.601182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.601740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.602289] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.602342] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.602378] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.610448] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.958 [2024-10-28 15:29:48.619017] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.619834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.619907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.619949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.620484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.621052] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.621107] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.621161] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.629219] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.958 [2024-10-28 15:29:48.637640] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.638464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.638547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.638587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.639145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.639711] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.639767] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.639801] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.647861] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.958 [2024-10-28 15:29:48.656437] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.657237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.657308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.657349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.657905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.658455] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.658509] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.658543] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.666593] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.958 [2024-10-28 15:29:48.675168] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.676006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.676079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.676122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.676676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.677225] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.677279] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.677313] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.685378] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.958 [2024-10-28 15:29:48.693959] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.694757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.694829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.694870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.695403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.695973] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.696029] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.696066] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.704153] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.958 [2024-10-28 15:29:48.712753] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.713570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.713644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.713715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.714250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.714822] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.714877] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.714912] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.722989] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.958 [2024-10-28 15:29:48.731611] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.732430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.732501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.732542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.733102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.733671] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.733725] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.958 [2024-10-28 15:29:48.733759] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.958 [2024-10-28 15:29:48.741825] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.958 [2024-10-28 15:29:48.750397] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.958 [2024-10-28 15:29:48.751193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.958 [2024-10-28 15:29:48.751264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.958 [2024-10-28 15:29:48.751318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.958 [2024-10-28 15:29:48.751890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.958 [2024-10-28 15:29:48.752445] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.958 [2024-10-28 15:29:48.752499] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.959 [2024-10-28 15:29:48.752534] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.959 [2024-10-28 15:29:48.760600] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.959 [2024-10-28 15:29:48.769187] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.959 [2024-10-28 15:29:48.769992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-10-28 15:29:48.770064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-10-28 15:29:48.770104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.959 [2024-10-28 15:29:48.770637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.959 [2024-10-28 15:29:48.771210] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.959 [2024-10-28 15:29:48.771265] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.959 [2024-10-28 15:29:48.771300] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.959 [2024-10-28 15:29:48.779370] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.959 [2024-10-28 15:29:48.787953] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.959 [2024-10-28 15:29:48.788769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-10-28 15:29:48.788849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-10-28 15:29:48.788890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.959 [2024-10-28 15:29:48.789425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.959 [2024-10-28 15:29:48.789999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.959 [2024-10-28 15:29:48.790054] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.959 [2024-10-28 15:29:48.790090] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.959 [2024-10-28 15:29:48.798159] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.959 [2024-10-28 15:29:48.806748] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.959 [2024-10-28 15:29:48.807536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-10-28 15:29:48.807607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-10-28 15:29:48.807648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:01.959 [2024-10-28 15:29:48.808211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:01.959 [2024-10-28 15:29:48.808796] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.959 [2024-10-28 15:29:48.808852] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.959 [2024-10-28 15:29:48.808886] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.959 [2024-10-28 15:29:48.816958] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.220 [2024-10-28 15:29:48.825851] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.220 [2024-10-28 15:29:48.826699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-10-28 15:29:48.826775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-10-28 15:29:48.826817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.220 [2024-10-28 15:29:48.827398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.220 [2024-10-28 15:29:48.827982] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.220 [2024-10-28 15:29:48.828038] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.220 [2024-10-28 15:29:48.828073] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.220 [2024-10-28 15:29:48.836216] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.220 [2024-10-28 15:29:48.844806] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.220 [2024-10-28 15:29:48.845596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.845686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.845732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.846268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.846842] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.846896] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.846930] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.855010] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.221 [2024-10-28 15:29:48.863577] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.864350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.864422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.864463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.865024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.865570] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.865623] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.865681] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.873781] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.221 [2024-10-28 15:29:48.882358] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.883192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.883263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.883303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.883866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.884414] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.884467] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.884502] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.892573] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.221 [2024-10-28 15:29:48.901169] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.901971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.902049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.902090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.902624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.903201] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.903255] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.903290] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.911359] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.221 [2024-10-28 15:29:48.919937] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.920768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.920847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.920888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.921422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.921999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.922054] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.922088] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.930161] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.221 [2024-10-28 15:29:48.938810] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.939680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.939759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.939805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.940339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.940909] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.940965] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.940999] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.949068] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.221 [2024-10-28 15:29:48.957630] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.958453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.958524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.958565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.959123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.959696] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.959750] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.959786] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.967856] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.221 [2024-10-28 15:29:48.976416] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.977256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.977334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.977375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.977940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.978488] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.221 [2024-10-28 15:29:48.978542] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.221 [2024-10-28 15:29:48.978576] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.221 [2024-10-28 15:29:48.986643] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.221 [2024-10-28 15:29:48.995231] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.221 [2024-10-28 15:29:48.996079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-10-28 15:29:48.996151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-10-28 15:29:48.996203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.221 [2024-10-28 15:29:48.996767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.221 [2024-10-28 15:29:48.997315] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.222 [2024-10-28 15:29:48.997369] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.222 [2024-10-28 15:29:48.997402] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.222 [2024-10-28 15:29:49.005479] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.222 [2024-10-28 15:29:49.014069] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.222 [2024-10-28 15:29:49.014906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.222 [2024-10-28 15:29:49.014978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.222 [2024-10-28 15:29:49.015019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.222 [2024-10-28 15:29:49.015552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.222 [2024-10-28 15:29:49.016127] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.222 [2024-10-28 15:29:49.016182] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.222 [2024-10-28 15:29:49.016217] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.222 [2024-10-28 15:29:49.024280] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.222 [2024-10-28 15:29:49.032605] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.222 [2024-10-28 15:29:49.033463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.222 [2024-10-28 15:29:49.033536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.222 [2024-10-28 15:29:49.033585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.222 [2024-10-28 15:29:49.034144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.222 [2024-10-28 15:29:49.034709] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.222 [2024-10-28 15:29:49.034765] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.222 [2024-10-28 15:29:49.034801] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.222 [2024-10-28 15:29:49.042877] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.222 [2024-10-28 15:29:49.051441] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.222 [2024-10-28 15:29:49.052284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.222 [2024-10-28 15:29:49.052356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.222 [2024-10-28 15:29:49.052397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.222 [2024-10-28 15:29:49.052960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.222 [2024-10-28 15:29:49.053509] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.222 [2024-10-28 15:29:49.053576] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.222 [2024-10-28 15:29:49.053614] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.222 [2024-10-28 15:29:49.061707] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.222 [2024-10-28 15:29:49.070275] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.222 [2024-10-28 15:29:49.071096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.222 [2024-10-28 15:29:49.071168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.222 [2024-10-28 15:29:49.071209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.222 [2024-10-28 15:29:49.071771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.222 [2024-10-28 15:29:49.072320] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.222 [2024-10-28 15:29:49.072374] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.222 [2024-10-28 15:29:49.072408] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.222 [2024-10-28 15:29:49.080524] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.484 [2024-10-28 15:29:49.089353] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.484 [2024-10-28 15:29:49.090228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.484 [2024-10-28 15:29:49.090311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.484 [2024-10-28 15:29:49.090355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.484 [2024-10-28 15:29:49.090962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.484 [2024-10-28 15:29:49.091514] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.484 [2024-10-28 15:29:49.091577] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.484 [2024-10-28 15:29:49.091614] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.484 [2024-10-28 15:29:49.099732] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.484 [2024-10-28 15:29:49.108324] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.484 [2024-10-28 15:29:49.109144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.484 [2024-10-28 15:29:49.109228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.484 [2024-10-28 15:29:49.109271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.484 [2024-10-28 15:29:49.109832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.484 [2024-10-28 15:29:49.110383] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.484 [2024-10-28 15:29:49.110438] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.484 [2024-10-28 15:29:49.110471] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.484 [2024-10-28 15:29:49.118557] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.484 [2024-10-28 15:29:49.127152] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.484 [2024-10-28 15:29:49.127977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.484 [2024-10-28 15:29:49.128050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.484 [2024-10-28 15:29:49.128091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.484 [2024-10-28 15:29:49.128624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.484 [2024-10-28 15:29:49.129213] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.484 [2024-10-28 15:29:49.129269] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.484 [2024-10-28 15:29:49.129304] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.484 [2024-10-28 15:29:49.137412] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.484 [2024-10-28 15:29:49.146034] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.484 [2024-10-28 15:29:49.146856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.484 [2024-10-28 15:29:49.146930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.484 [2024-10-28 15:29:49.146971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.484 [2024-10-28 15:29:49.147505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.484 [2024-10-28 15:29:49.148081] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.484 [2024-10-28 15:29:49.148135] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.484 [2024-10-28 15:29:49.148170] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3314546 Killed "${NVMF_APP[@]}" "$@" 00:34:02.484 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:02.484 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:02.484 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:02.484 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:02.484 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.485 [2024-10-28 15:29:49.156249] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3315498 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3315498 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3315498 ']' 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:02.485 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.485 [2024-10-28 15:29:49.161773] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.162219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.162251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.162270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.162507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.162763] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.162788] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.162805] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.166360] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.485 [2024-10-28 15:29:49.175598] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.176047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.176080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.176098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.176336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.176578] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.176602] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.176619] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.180184] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.485 [2024-10-28 15:29:49.189427] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.189852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.189884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.189902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.190140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.190381] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.190405] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.190421] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.193987] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.485 [2024-10-28 15:29:49.203438] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.203804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.203836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.203855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.204094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.204336] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.204360] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.204377] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.207940] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.485 [2024-10-28 15:29:49.214995] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:34:02.485 [2024-10-28 15:29:49.215066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.485 [2024-10-28 15:29:49.217374] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.217723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.217755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.217773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.218010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.218251] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.218274] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.218289] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.221879] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.485 [2024-10-28 15:29:49.231506] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.231910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.231942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.231960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.232198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.232439] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.232463] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.232477] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.236061] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.485 [2024-10-28 15:29:49.245511] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.245931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.245964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.245982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.246218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.246460] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.246483] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.246498] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.250062] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.485 [2024-10-28 15:29:49.259508] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.259923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.259955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.259973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.260209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.260452] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.260475] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.260490] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.267123] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.485 [2024-10-28 15:29:49.278550] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.279112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.485 [2024-10-28 15:29:49.279183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.485 [2024-10-28 15:29:49.279222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.485 [2024-10-28 15:29:49.279783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.485 [2024-10-28 15:29:49.280327] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.485 [2024-10-28 15:29:49.280379] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.485 [2024-10-28 15:29:49.280414] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.485 [2024-10-28 15:29:49.288229] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.485 [2024-10-28 15:29:49.297289] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.485 [2024-10-28 15:29:49.298090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.486 [2024-10-28 15:29:49.298173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.486 [2024-10-28 15:29:49.298216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.486 [2024-10-28 15:29:49.298773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.486 [2024-10-28 15:29:49.299318] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.486 [2024-10-28 15:29:49.299370] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.486 [2024-10-28 15:29:49.299403] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.486 [2024-10-28 15:29:49.307234] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.486 [2024-10-28 15:29:49.316292] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.486 [2024-10-28 15:29:49.317099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.486 [2024-10-28 15:29:49.317170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.486 [2024-10-28 15:29:49.317211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.486 [2024-10-28 15:29:49.317769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.486 [2024-10-28 15:29:49.318314] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.486 [2024-10-28 15:29:49.318366] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.486 [2024-10-28 15:29:49.318400] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.486 [2024-10-28 15:29:49.326494] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.486 [2024-10-28 15:29:49.334245] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.486 [2024-10-28 15:29:49.335054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.486 [2024-10-28 15:29:49.335125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.486 [2024-10-28 15:29:49.335165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.486 [2024-10-28 15:29:49.335719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.486 [2024-10-28 15:29:49.336265] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.486 [2024-10-28 15:29:49.336318] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.486 [2024-10-28 15:29:49.336352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.486 [2024-10-28 15:29:49.344289] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.746 [2024-10-28 15:29:49.353615] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.354512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.354586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.354627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.354939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:02.746 [2024-10-28 15:29:49.354988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.355568] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.355624] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.355681] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.363522] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.746 [2024-10-28 15:29:49.371718] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.372498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.372574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.372617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.372939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.373486] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.373540] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.373578] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.381019] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.746 [2024-10-28 15:29:49.390571] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.391219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.391295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.391352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.391916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.392464] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.392516] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.392552] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.400156] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.746 [2024-10-28 15:29:49.409223] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.410026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.410098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.410138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.410705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.411249] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.411322] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.411359] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.418981] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.746 [2024-10-28 15:29:49.428053] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.428842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.428916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.428958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.429502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.429887] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.429912] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.429927] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.437856] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.746 [2024-10-28 15:29:49.446926] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.447701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.447775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.447816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.448351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.448924] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.448979] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.449013] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.457082] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.746 [2024-10-28 15:29:49.465648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.466457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.466527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.466577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.466931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.467468] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.467521] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.467557] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.473085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.746 [2024-10-28 15:29:49.473161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.746 [2024-10-28 15:29:49.473198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.746 [2024-10-28 15:29:49.473228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.746 [2024-10-28 15:29:49.473253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.746 [2024-10-28 15:29:49.474834] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
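00:34:02.746 The app_setup_trace notices above describe how to capture the tracepoints enabled by '-e 0xFFFF' on this nvmf_tgt instance. A minimal sketch of that workflow, assuming the spdk_trace binary from this build tree and the shm name/id printed in the notice (nvmf, instance 0):
00:34:02.746   # Snapshot the live trace ring buffer of this target (-s = shm name, -i = shm instance id, per the notice)
00:34:02.746   /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
00:34:02.746   # Or keep the raw shm file for offline analysis, as the notice suggests
00:34:02.746   cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0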
00:34:02.746 [2024-10-28 15:29:49.476594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:02.746 [2024-10-28 15:29:49.476720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:02.746 [2024-10-28 15:29:49.476769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.746 [2024-10-28 15:29:49.480092] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.480516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.480549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.746 [2024-10-28 15:29:49.480569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.746 [2024-10-28 15:29:49.480819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.746 [2024-10-28 15:29:49.481074] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.746 [2024-10-28 15:29:49.481098] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.746 [2024-10-28 15:29:49.481116] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.746 [2024-10-28 15:29:49.484699] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.746 [2024-10-28 15:29:49.493948] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.746 [2024-10-28 15:29:49.494459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.746 [2024-10-28 15:29:49.494500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.494523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.494779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.495028] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.495053] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.495072] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.498657] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.747 [2024-10-28 15:29:49.507947] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.508586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.508640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.508674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.508950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.509206] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.509232] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.509251] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.512882] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.747 [2024-10-28 15:29:49.521945] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.522456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.522497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.522520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.522776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.523029] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.523054] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.523073] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.526637] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.747 3868.67 IOPS, 15.11 MiB/s [2024-10-28T14:29:49.614Z] [2024-10-28 15:29:49.537718] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.538169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.538207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.538229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.538473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.538732] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.538758] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.538776] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.542326] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.747 [2024-10-28 15:29:49.551592] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.552079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.552121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.552143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.552389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.552648] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.552696] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.552716] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.556268] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.747 [2024-10-28 15:29:49.565503] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.565961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.566001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.566023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.566269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.566518] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.566552] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.566571] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.570136] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.747 [2024-10-28 15:29:49.579363] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.579753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.579786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.579805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.580045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.580288] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.580312] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.580329] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.583896] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:02.747 [2024-10-28 15:29:49.593316] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.593738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.593770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.593788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.594026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.594269] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.594293] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.594310] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:02.747 [2024-10-28 15:29:49.597864] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:02.747 [2024-10-28 15:29:49.607341] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:02.747 [2024-10-28 15:29:49.607789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.747 [2024-10-28 15:29:49.607822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:02.747 [2024-10-28 15:29:49.607845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:02.747 [2024-10-28 15:29:49.608083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:02.747 [2024-10-28 15:29:49.608325] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:02.747 [2024-10-28 15:29:49.608348] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:02.747 [2024-10-28 15:29:49.608364] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 [2024-10-28 15:29:49.612024] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:03.008 [2024-10-28 15:29:49.621310] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.008 [2024-10-28 15:29:49.621751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.008 [2024-10-28 15:29:49.621784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.008 [2024-10-28 15:29:49.621803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.008 [2024-10-28 15:29:49.622040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:03.008 [2024-10-28 15:29:49.622282] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:34:03.008 [2024-10-28 15:29:49.622306] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.008 [2024-10-28 15:29:49.622322] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.008 [2024-10-28 15:29:49.625886] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:03.008 [2024-10-28 15:29:49.635346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.008 [2024-10-28 15:29:49.635747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.008 [2024-10-28 15:29:49.635781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.008 [2024-10-28 15:29:49.635799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.008 [2024-10-28 15:29:49.636037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.008 [2024-10-28 15:29:49.636280] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.008 [2024-10-28 15:29:49.636303] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.008 [2024-10-28 15:29:49.636318] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 [2024-10-28 15:29:49.639882] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.008 [2024-10-28 15:29:49.649320] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.008 [2024-10-28 15:29:49.649754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.008 [2024-10-28 15:29:49.649786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.008 [2024-10-28 15:29:49.649805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.008 [2024-10-28 15:29:49.650043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.008 [2024-10-28 15:29:49.650285] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.008 [2024-10-28 15:29:49.650309] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.008 [2024-10-28 15:29:49.650324] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 [2024-10-28 15:29:49.653881] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:03.008 [2024-10-28 15:29:49.653960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.008 [2024-10-28 15:29:49.663303] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.008 [2024-10-28 15:29:49.663753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.008 [2024-10-28 15:29:49.663784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.008 [2024-10-28 15:29:49.663802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.008 [2024-10-28 15:29:49.664039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.008 [2024-10-28 15:29:49.664281] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.008 [2024-10-28 15:29:49.664305] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.008 [2024-10-28 15:29:49.664320] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 [2024-10-28 15:29:49.667895] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:03.008 [2024-10-28 15:29:49.677345] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.008 [2024-10-28 15:29:49.677837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.008 [2024-10-28 15:29:49.677881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.008 [2024-10-28 15:29:49.677923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.008 [2024-10-28 15:29:49.678167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.008 [2024-10-28 15:29:49.678413] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.008 [2024-10-28 15:29:49.678437] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.008 [2024-10-28 15:29:49.678455] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 [2024-10-28 15:29:49.682009] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:03.008 [2024-10-28 15:29:49.691212] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.008 [2024-10-28 15:29:49.691675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.008 [2024-10-28 15:29:49.691707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.008 [2024-10-28 15:29:49.691725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.008 [2024-10-28 15:29:49.691962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.008 [2024-10-28 15:29:49.692203] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.008 [2024-10-28 15:29:49.692226] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.008 [2024-10-28 15:29:49.692242] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 [2024-10-28 15:29:49.695795] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:03.008 [2024-10-28 15:29:49.705248] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.008 [2024-10-28 15:29:49.705712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.008 [2024-10-28 15:29:49.705747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.008 [2024-10-28 15:29:49.705769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.008 Malloc0 00:34:03.008 [2024-10-28 15:29:49.706011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.008 [2024-10-28 15:29:49.706257] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.008 [2024-10-28 15:29:49.706282] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.008 [2024-10-28 15:29:49.706300] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.008 [2024-10-28 15:29:49.709860] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.008 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.009 [2024-10-28 15:29:49.719283] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.009 [2024-10-28 15:29:49.719727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.009 [2024-10-28 15:29:49.719759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d040 with addr=10.0.0.2, port=4420 00:34:03.009 [2024-10-28 15:29:49.719777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d040 is same with the state(6) to be set 00:34:03.009 [2024-10-28 15:29:49.720014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d040 (9): Bad file descriptor 00:34:03.009 [2024-10-28 15:29:49.720256] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:03.009 [2024-10-28 15:29:49.720280] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:03.009 [2024-10-28 15:29:49.720295] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.009 [2024-10-28 15:29:49.723883] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:03.009 [2024-10-28 15:29:49.725713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.009 15:29:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3314833 00:34:03.009 [2024-10-28 15:29:49.733342] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:03.266 [2024-10-28 15:29:49.894164] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
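Interleaved with that reconnect loop, the xtrace above shows host/bdevperf.sh standing up the target side through rpc_cmd: nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener; once the listener on 10.0.0.2:4420 is added, the pending reset finally completes ("Resetting controller successful"). A minimal equivalent sequence, sketched with SPDK's scripts/rpc.py front-end (socket path and working directory assumed; the arguments are copied verbatim from the trace):

  # create the TCP transport with the options used by the test
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # back the namespace with a 64 MiB / 512-byte-block malloc bdev
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # create the subsystem, attach the namespace, and start listening on TCP 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420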
00:34:04.765 4072.00 IOPS, 15.91 MiB/s [2024-10-28T14:29:52.564Z] 4673.25 IOPS, 18.25 MiB/s [2024-10-28T14:29:53.580Z] 5137.22 IOPS, 20.07 MiB/s [2024-10-28T14:29:54.955Z] 5519.40 IOPS, 21.56 MiB/s [2024-10-28T14:29:55.888Z] 5819.73 IOPS, 22.73 MiB/s [2024-10-28T14:29:56.823Z] 6071.58 IOPS, 23.72 MiB/s [2024-10-28T14:29:57.758Z] 6287.69 IOPS, 24.56 MiB/s [2024-10-28T14:29:58.692Z] 6485.00 IOPS, 25.33 MiB/s [2024-10-28T14:29:58.692Z] 6637.20 IOPS, 25.93 MiB/s 00:34:11.825 Latency(us) 00:34:11.825 [2024-10-28T14:29:58.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.825 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:11.825 Verification LBA range: start 0x0 length 0x4000 00:34:11.825 Nvme1n1 : 15.01 6641.46 25.94 4718.41 0.00 11234.24 922.36 30292.20 00:34:11.825 [2024-10-28T14:29:58.692Z] =================================================================================================================== 00:34:11.825 [2024-10-28T14:29:58.692Z] Total : 6641.46 25.94 4718.41 0.00 11234.24 922.36 30292.20 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:12.083 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.084 rmmod nvme_tcp 00:34:12.084 rmmod nvme_fabrics 00:34:12.084 rmmod nvme_keyring 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3315498 ']' 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3315498 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3315498 ']' 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3315498 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3315498 
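In the bdevperf summary above, the MiB/s column follows directly from IOPS times the 4096-byte I/O size; for the Nvme1n1 row, for example (illustrative arithmetic only):

  # 6641.46 IOPS * 4096 B per I/O, expressed in MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 6641.46 * 4096 / (1024 * 1024) }'
  # prints 25.94 MiB/s, matching the reported value for 6641.46 IOPS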
00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3315498' 00:34:12.084 killing process with pid 3315498 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3315498 00:34:12.084 15:29:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3315498 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.654 15:29:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:14.568 00:34:14.568 real 0m23.696s 00:34:14.568 user 1m0.577s 00:34:14.568 sys 0m5.309s 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.568 ************************************ 00:34:14.568 END TEST nvmf_bdevperf 00:34:14.568 ************************************ 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.568 ************************************ 00:34:14.568 START TEST nvmf_target_disconnect 00:34:14.568 ************************************ 00:34:14.568 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.568 * Looking for test storage... 
00:34:14.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:34:14.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.828 --rc genhtml_branch_coverage=1 00:34:14.828 --rc genhtml_function_coverage=1 00:34:14.828 --rc genhtml_legend=1 00:34:14.828 --rc geninfo_all_blocks=1 00:34:14.828 --rc geninfo_unexecuted_blocks=1 00:34:14.828 00:34:14.828 ' 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:34:14.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.828 --rc genhtml_branch_coverage=1 00:34:14.828 --rc genhtml_function_coverage=1 00:34:14.828 --rc genhtml_legend=1 00:34:14.828 --rc geninfo_all_blocks=1 00:34:14.828 --rc geninfo_unexecuted_blocks=1 00:34:14.828 00:34:14.828 ' 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:34:14.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.828 --rc genhtml_branch_coverage=1 00:34:14.828 --rc genhtml_function_coverage=1 00:34:14.828 --rc genhtml_legend=1 00:34:14.828 --rc geninfo_all_blocks=1 00:34:14.828 --rc geninfo_unexecuted_blocks=1 00:34:14.828 00:34:14.828 ' 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:34:14.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.828 --rc genhtml_branch_coverage=1 00:34:14.828 --rc genhtml_function_coverage=1 00:34:14.828 --rc genhtml_legend=1 00:34:14.828 --rc geninfo_all_blocks=1 00:34:14.828 --rc geninfo_unexecuted_blocks=1 00:34:14.828 00:34:14.828 ' 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.828 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:34:14.829 15:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.117 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:18.118 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:18.118 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:18.118 Found net devices under 0000:84:00.0: cvl_0_0 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:18.118 Found net devices under 0000:84:00.1: cvl_0_1 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
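The device-discovery trace above matches the two Intel E810 functions (device ID 0x159b) at 0000:84:00.0 and 0000:84:00.1 and resolves each to its kernel net device (cvl_0_0, cvl_0_1) through sysfs, as in the pci_net_devs expansion shown. The lookup amounts to roughly the following (PCI address shown only as an example):

  # list the kernel net devices bound to one PCI function; here this prints cvl_0_0
  pci=0000:84:00.0
  ls /sys/bus/pci/devices/$pci/net/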
00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:18.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:34:18.118 00:34:18.118 --- 10.0.0.2 ping statistics --- 00:34:18.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.118 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:18.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:34:18.118 00:34:18.118 --- 10.0.0.1 ping statistics --- 00:34:18.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.118 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.118 ************************************ 00:34:18.118 START TEST nvmf_target_disconnect_tc1 00:34:18.118 ************************************ 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:18.118 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.119 15:30:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.119 [2024-10-28 15:30:04.762696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.119 [2024-10-28 15:30:04.762865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23560 with addr=10.0.0.2, port=4420 00:34:18.119 [2024-10-28 15:30:04.762959] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:18.119 [2024-10-28 15:30:04.763018] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:18.119 [2024-10-28 15:30:04.763053] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:18.119 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:18.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:18.119 Initializing NVMe Controllers 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:18.119 00:34:18.119 real 0m0.200s 00:34:18.119 user 0m0.102s 00:34:18.119 sys 0m0.096s 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:18.119 ************************************ 00:34:18.119 END TEST nvmf_target_disconnect_tc1 00:34:18.119 ************************************ 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
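The tc1 run above exercises the negative path: the NOT/valid_exec_arg wrappers launch the reconnect example against 10.0.0.2:4420 while nothing is listening there, and the test passes precisely because spdk_nvme_probe() fails (es=1, checked by (( !es == 0 ))). Stripped of the wrappers, the check amounts to something like this sketch (path abbreviated, arguments as in the trace):

  # tc1 passes only if the reconnect example FAILS to connect (no listener yet on 10.0.0.2:4420)
  if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
      echo "reconnect unexpectedly succeeded" >&2
      exit 1
  fi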
00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.119 ************************************ 00:34:18.119 START TEST nvmf_target_disconnect_tc2 00:34:18.119 ************************************ 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3318912 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3318912 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3318912 ']' 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:18.119 15:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.119 [2024-10-28 15:30:04.958904] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:34:18.119 [2024-10-28 15:30:04.959081] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.379 [2024-10-28 15:30:05.141641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.638 [2024-10-28 15:30:05.266742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.638 [2024-10-28 15:30:05.266851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:18.638 [2024-10-28 15:30:05.266890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.638 [2024-10-28 15:30:05.266929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.638 [2024-10-28 15:30:05.266941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.638 [2024-10-28 15:30:05.270357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:18.638 [2024-10-28 15:30:05.270391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:18.638 [2024-10-28 15:30:05.270451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:18.638 [2024-10-28 15:30:05.270456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 Malloc0 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 [2024-10-28 15:30:05.589215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 15:30:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 [2024-10-28 15:30:05.618021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3318941 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:18.896 15:30:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.798 15:30:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3318912 00:34:20.798 15:30:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error 
(sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Write completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Write completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Write completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Write completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Write completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.798 Read completed with error (sct=0, sc=8) 00:34:20.798 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 [2024-10-28 15:30:07.645339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed 
with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 [2024-10-28 15:30:07.645803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 
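The aborted completions above are the host reacting to a target that has just been configured and then torn down. The target-side configuration is spelled out in the rpc_cmd calls earlier in the trace and condenses to the sequence below (a sketch using scripts/rpc.py in place of the test's rpc_cmd wrapper; default RPC socket assumed):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420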
00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 [2024-10-28 15:30:07.646368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Write completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 starting I/O failed 00:34:20.799 Read completed with error (sct=0, sc=8) 00:34:20.799 
starting I/O failed 00:34:20.800 Read completed with error (sct=0, sc=8) 00:34:20.800 starting I/O failed 00:34:20.800 Read completed with error (sct=0, sc=8) 00:34:20.800 starting I/O failed 00:34:20.800 Read completed with error (sct=0, sc=8) 00:34:20.800 starting I/O failed 00:34:20.800 Write completed with error (sct=0, sc=8) 00:34:20.800 starting I/O failed 00:34:20.800 Write completed with error (sct=0, sc=8) 00:34:20.800 starting I/O failed 00:34:20.800 [2024-10-28 15:30:07.646908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.800 [2024-10-28 15:30:07.647084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.647130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.647350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.647378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.647581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.647629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.647778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.647806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.647953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.647981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.648138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.648162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.648326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.648378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.648490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.648516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 
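The host side of the test is the reconnect example launched against that listener; two seconds in, the target is SIGKILLed, which is what turns the queued I/O into the failed completions and "CQ transport error -6" messages above and leaves the host retrying for the rest of the log. Roughly, with the same arguments as the trace ($nvmfpid being the target PID saved at startup):

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # target gone; every connect attempt below is refused
  sleep 2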
00:34:20.800 [2024-10-28 15:30:07.648674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.648715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.648893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.648921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.649131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.649157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.649307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.649382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.649596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.649682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.649824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.649851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.649994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.650020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.650191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.650258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.650488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.650572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.650785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.650812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 
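The two codes that dominate from here on are ordinary Linux errnos: the -6 in the CQ transport error lines is ENXIO ("No such device or address", as the message itself spells out), and the 111 in the connect() failures is ECONNREFUSED, since nothing listens on 10.0.0.2:4420 once the target is dead. A throwaway way to confirm the mapping (assumes kernel headers are installed; not part of the test):

  grep -wE 'define (ENXIO|ECONNREFUSED)' /usr/include/asm-generic/errno*.h
  # -> ENXIO is 6 (No such device or address), ECONNREFUSED is 111 (Connection refused)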
00:34:20.800 [2024-10-28 15:30:07.650969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.651001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.651253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.651320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.651578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.651644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.651874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.651905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.652119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.652190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.652434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.652500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.652723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.652750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.652931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.652980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.653205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.653231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 00:34:20.800 [2024-10-28 15:30:07.653398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.800 [2024-10-28 15:30:07.653475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.800 qpair failed and we were unable to recover it. 
00:34:20.801 [2024-10-28 15:30:07.653775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.653802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.653912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.653953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.654194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.654260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.654575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.654643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.654879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.654915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.655082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.655147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.655476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.655540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.655831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.655857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.656010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.656075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.656323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.656388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 
00:34:20.801 [2024-10-28 15:30:07.656724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.656751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.656851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.656876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.657037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.657104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.657395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.657460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.657743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.657769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.657955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.658019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.658324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.658349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.658472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.658497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.658727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.658753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.658875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.658901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 
00:34:20.801 [2024-10-28 15:30:07.659150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.659215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.659523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.659587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.659889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.659919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.660062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.660139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.660403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.660469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.660712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.660739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.660891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.660918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.661062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.661088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.661306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.661337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 00:34:20.801 [2024-10-28 15:30:07.661563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.801 [2024-10-28 15:30:07.661628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.801 qpair failed and we were unable to recover it. 
00:34:20.802 [2024-10-28 15:30:07.661960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.802 [2024-10-28 15:30:07.662027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.802 qpair failed and we were unable to recover it. 00:34:20.802 [2024-10-28 15:30:07.662336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.802 [2024-10-28 15:30:07.662361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.802 qpair failed and we were unable to recover it. 00:34:20.802 [2024-10-28 15:30:07.662513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.802 [2024-10-28 15:30:07.662587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.802 qpair failed and we were unable to recover it. 00:34:20.802 [2024-10-28 15:30:07.662890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.802 [2024-10-28 15:30:07.662960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.802 qpair failed and we were unable to recover it. 00:34:20.802 [2024-10-28 15:30:07.663221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.802 [2024-10-28 15:30:07.663246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:20.802 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.663386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.663452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.663720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.663785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.664074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.664100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.664287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.664365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.664625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.664711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 
00:34:21.084 [2024-10-28 15:30:07.664983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.665028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.665250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.665319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.665671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.665742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.666036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.666061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.666307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.666392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.666713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.666779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.666997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.667021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.667205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.667269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.667564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.667628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 00:34:21.084 [2024-10-28 15:30:07.667938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.084 [2024-10-28 15:30:07.667963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.084 qpair failed and we were unable to recover it. 
00:34:21.085 [2024-10-28 15:30:07.668156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.668221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.668449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.668513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.668774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.668801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.668956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.669020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.669297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.669361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.669635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.669718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.669938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.669964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.670134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.670159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.670310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.670337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.670543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.670569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 
00:34:21.085 [2024-10-28 15:30:07.670806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.670831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.671024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.671048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.671265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.671289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.671412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.671446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.671676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.671715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.671862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.671886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.671980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.672004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.672139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.672163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.672395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.672424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.672604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.672628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 
00:34:21.085 [2024-10-28 15:30:07.672862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.672894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.673058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.673083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.673263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.673287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.673490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.673515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.673643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.673683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.673823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.673849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.674089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.674116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.674333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.674358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.674555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.674580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 00:34:21.085 [2024-10-28 15:30:07.674780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.085 [2024-10-28 15:30:07.674807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.085 qpair failed and we were unable to recover it. 
00:34:21.085 [2024-10-28 15:30:07.674947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.085 [2024-10-28 15:30:07.674972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:21.085 qpair failed and we were unable to recover it.
00:34:21.085 [... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 15:30:07.674947 through 15:30:07.718052; duplicate entries omitted ...]
00:34:21.090 [2024-10-28 15:30:07.718029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.090 [2024-10-28 15:30:07.718052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:21.090 qpair failed and we were unable to recover it.
00:34:21.090 [2024-10-28 15:30:07.718184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.718208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.718420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.718443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.718620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.718740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.718937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.718961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.719200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.719224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.719331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.719355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.719528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.719554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.719697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.719727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.719849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.719874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.720026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.720049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 
00:34:21.090 [2024-10-28 15:30:07.720231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.720256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.720468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.720500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.720695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.720730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.720948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.720972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.721139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.721163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.721305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.721329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.721485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.721524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.721668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.090 [2024-10-28 15:30:07.721712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.090 qpair failed and we were unable to recover it. 00:34:21.090 [2024-10-28 15:30:07.721925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.721953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.722155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.722179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 
00:34:21.091 [2024-10-28 15:30:07.722311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.722334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.722482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.722509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.722638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.722695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.722873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.722898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.723159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.723183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.723365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.723389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.723580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.723603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.723804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.723830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.723960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.723985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.724193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.724217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 
00:34:21.091 [2024-10-28 15:30:07.724354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.724378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.724545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.724584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.724753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.724788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.725012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.725036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.725187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.725211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.725359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.725398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.725526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.725556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.725739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.725764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.725925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.725949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.726086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.726112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 
00:34:21.091 [2024-10-28 15:30:07.726319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.726358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.726549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.726572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.726754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.726779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.726971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.726995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.727228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.727253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.727508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.727532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.727701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.727726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.727905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.727930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.728049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.728079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.728222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.728261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 
00:34:21.091 [2024-10-28 15:30:07.728440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.728463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.728599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.728623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.728864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.728894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.729040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.729063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.729265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.729289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.729428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.729452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.729567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.729592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.729738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.729763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.729889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.729917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.730156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.730179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 
00:34:21.091 [2024-10-28 15:30:07.730417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.730441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.730716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.730741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.730923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.730965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.731143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.731167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.731371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.731395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.731526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.731550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.731715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.731740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.731949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.731973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.732100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.732124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.732364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.732388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 
00:34:21.091 [2024-10-28 15:30:07.732536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.732559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.732642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.732693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.732861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.732887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.733034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.733058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.733173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.733198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.733350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.733374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.733549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.733573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.733759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.733784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.733980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.734004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.734132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.734156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 
00:34:21.091 [2024-10-28 15:30:07.734274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.734301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.734488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.734514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.734704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.734737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.734873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.734900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.735055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.735080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.735237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.735264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.735407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.735444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.091 [2024-10-28 15:30:07.735600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.091 [2024-10-28 15:30:07.735624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.091 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.735818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.735843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.736060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.736085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 
00:34:21.092 [2024-10-28 15:30:07.736212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.736251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.736408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.736447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.736687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.736711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.736842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.736866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.737025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.737049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.737197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.737240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.737379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.737403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.737634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.737682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.737818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.737842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.737973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.738007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 
00:34:21.092 [2024-10-28 15:30:07.738174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.738198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.738370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.738393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.738630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.738672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.738828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.738853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.739034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.739057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.739231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.739254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.739407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.739431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.739675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.739701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.739882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.739906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.740089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.740113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 
00:34:21.092 [2024-10-28 15:30:07.740289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.740314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.740430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.740455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.740673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.740724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.740857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.740886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.741035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.741060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.741222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.741252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.741467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.741491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.741665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.741692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.741823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.741854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.742035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.742058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 
00:34:21.092 [2024-10-28 15:30:07.742240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.742264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.742417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.742440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.742577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.742601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.742827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.742854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.742997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.743022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.743140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.743182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.743422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.743446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.743645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.743680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.743824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.743849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.743987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.744011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 
00:34:21.092 [2024-10-28 15:30:07.744216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.744240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.744457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.744480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.744714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.744739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.744958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.744983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.745113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.745147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.745334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.745358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.745496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.745523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.745726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.745751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.745870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.745900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 00:34:21.092 [2024-10-28 15:30:07.746138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.746162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.092 qpair failed and we were unable to recover it. 
00:34:21.092 [2024-10-28 15:30:07.746346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.092 [2024-10-28 15:30:07.746370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.746557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.746581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.746725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.746807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.747083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.747110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.747335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.747359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.747520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.747590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.747847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.747876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.748065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.748093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.748240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.748315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.748518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.748596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 
00:34:21.093 [2024-10-28 15:30:07.748926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.748993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.749156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.749180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.749306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.749331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.749532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.749599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.749741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.749766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.749983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.750022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.750185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.750210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.750395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.750419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.750538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.750609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.750822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.750846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 
00:34:21.093 [2024-10-28 15:30:07.751156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.751200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.751472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.751547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.751676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.751727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.751848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.751875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.752097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.752121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.752259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.752287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.752500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.752528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.752754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.752779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.752912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.752936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.753112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.753137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 
00:34:21.093 [2024-10-28 15:30:07.753256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.753294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.753474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.753498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.753700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.753725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.753858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.753887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.753999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.754022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.754153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.754178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.754376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.754399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.754617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.754641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.093 [2024-10-28 15:30:07.754782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.093 [2024-10-28 15:30:07.754807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.093 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.754995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.755034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 
00:34:21.094 [2024-10-28 15:30:07.755202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.755231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.755361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.755399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.755597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.755620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.755808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.755832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.756015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.756039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.756171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.756194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.756399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.756423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.756544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.756568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.756657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.756701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.756863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.756891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 
00:34:21.094 [2024-10-28 15:30:07.757040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.757064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.757223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.757261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.757425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.757449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.757646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.757703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.757891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.757915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.758037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.758061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.758161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.758185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.758342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.758366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.758603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.758627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.758801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.758827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 
00:34:21.094 [2024-10-28 15:30:07.759020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.759043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.759276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.759301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.759489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.759565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.759689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.759728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.759887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.759911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.760026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.760068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.760240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.760278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.760527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.760593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.760835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.760911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.761149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.761216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 
00:34:21.094 [2024-10-28 15:30:07.761442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.761507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.761720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.761745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.761956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.761986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.762170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.762200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.762342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.762367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.762561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.762585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.762795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.762820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.763037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.763067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.094 qpair failed and we were unable to recover it. 00:34:21.094 [2024-10-28 15:30:07.763340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.094 [2024-10-28 15:30:07.763364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.763582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.763606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 
00:34:21.095 [2024-10-28 15:30:07.763752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.763777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.763911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.763949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.764189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.764213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.764394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.764418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.764633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.764727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.764883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.764909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.765097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.765121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.765316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.765339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.765507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.765535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.765690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.765724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 
00:34:21.095 [2024-10-28 15:30:07.765846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.765889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.766113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.766138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.766358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.766382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.766561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.766627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.766852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.766876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.767104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.767130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.767288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.767314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.767542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.767611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.767850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.767877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.768004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.768046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 
00:34:21.095 [2024-10-28 15:30:07.768196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.768262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.768472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.768547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.768810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.768835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.768947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.768971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.769179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.769213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.769372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.769399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.769550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.769619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.769843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.769868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.770056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.770080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.770329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.770353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 
00:34:21.095 [2024-10-28 15:30:07.770499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.095 [2024-10-28 15:30:07.770527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.095 qpair failed and we were unable to recover it. 00:34:21.095 [2024-10-28 15:30:07.770720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.770760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.770915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.770939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.771136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.771160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.771342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.771366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.771553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.771577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.771808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.771834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.772066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.772093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.772235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.772258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.772447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.772471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 
00:34:21.096 [2024-10-28 15:30:07.772633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.772684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.772868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.772898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.773131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.773155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.773286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.773312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.773536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.773560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.773730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.773755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.773878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.773917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.774012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.774036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.774210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.774249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.774424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.774447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 
00:34:21.096 [2024-10-28 15:30:07.774666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.774692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.774882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.774907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.775063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.775090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.775260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.775284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.775438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.775467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.775665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.775705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.775859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.775890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.776019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.776043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.776199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.776238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.776419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.776443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 
00:34:21.096 [2024-10-28 15:30:07.776707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.776733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.776873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.776901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.777011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.777035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.777251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.777275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.777504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.777533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.777741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.777770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.777881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.777906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.778097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.778135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.778364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.778388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 00:34:21.096 [2024-10-28 15:30:07.778618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.096 [2024-10-28 15:30:07.778743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.096 qpair failed and we were unable to recover it. 
00:34:21.096 [2024-10-28 15:30:07.779013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.779052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.779295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.779319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.779514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.779590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.779851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.779876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.780043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.780070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.780245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.780324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.780509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.780578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.780782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.780807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.780954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.780993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.781187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.781253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 
00:34:21.097 [2024-10-28 15:30:07.781453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.781518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.781757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.781790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.781988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.782016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.782154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.782177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.782301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.782326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.782470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.782494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.782677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.782702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.782840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.782868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.783020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.783060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.783201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.783225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 
00:34:21.097 [2024-10-28 15:30:07.783398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.783436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.783605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.783642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.783770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.783794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.784012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.784036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.784167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.784190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.784447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.784470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.784691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.784717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.784940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.784965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.785138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.785162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.785410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.785434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 
00:34:21.097 [2024-10-28 15:30:07.785679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.785704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.785849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.785878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.786027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.786051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.786180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.786204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.786410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.786434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.786614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.786681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.786881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.786906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.787043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.097 [2024-10-28 15:30:07.787066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.097 qpair failed and we were unable to recover it. 00:34:21.097 [2024-10-28 15:30:07.787248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.787272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.787401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.787439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 
00:34:21.098 [2024-10-28 15:30:07.787612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.787661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.787800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.787825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.787983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.788020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.788261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.788286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.788552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.788619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.788852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.788929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.789056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.789129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.789319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.789393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.789643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.789723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.789855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.789879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 
00:34:21.098 [2024-10-28 15:30:07.790107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.790131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.790316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.790340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.790481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.790505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.790743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.790770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.790918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.790943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.791119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.791143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.791398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.791422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.791561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.791585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.791752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.791776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.791916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.791954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 
00:34:21.098 [2024-10-28 15:30:07.792134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.792159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.792319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.792343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.792531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.792555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.792691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.792720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.792950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.792973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.793123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.793147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.793337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.793361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.793499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.793523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.793753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.793778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.794035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.794059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 
00:34:21.098 [2024-10-28 15:30:07.794219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.794244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.794383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.794422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.794543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.794568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.794800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.794826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.795009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.795049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.795219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.795247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.795406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.795430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.795684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.795709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.795856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.795885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.796040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.796063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 
00:34:21.098 [2024-10-28 15:30:07.796211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.796235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.796418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.796442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.796537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.796561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.796740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.796765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.796889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.796927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.797039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.797063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.797235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.098 [2024-10-28 15:30:07.797278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.098 qpair failed and we were unable to recover it. 00:34:21.098 [2024-10-28 15:30:07.797417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.797440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.797695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.797720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.797887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.797912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 
00:34:21.099 [2024-10-28 15:30:07.798107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.798130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.798267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.798291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.798484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.798511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.798668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.798692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.798862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.798889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.799077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.799101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.799246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.799270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.799470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.799498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.799704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.799729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.799897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.799921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 
00:34:21.099 [2024-10-28 15:30:07.800159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.800183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.800375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.800399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.800531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.800558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.800687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.800712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.800919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.800957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.801139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.801162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.801306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.801333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.801538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.801562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.801725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.801749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.801909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.801932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 
00:34:21.099 [2024-10-28 15:30:07.802073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.802103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.802223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.802247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.802387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.802412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.802571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.802596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.802753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.802800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.802929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.802957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.803096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.803135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.803234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.803258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.803387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.803411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.803638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.803688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 
00:34:21.099 [2024-10-28 15:30:07.803850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.803876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.804050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.804074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.804326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.804350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.804562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.804628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.804873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.804903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.804999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.805039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.805199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.805223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.805360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.805384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.805546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.805572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.805753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.805787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 
00:34:21.099 [2024-10-28 15:30:07.805905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.805943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.806107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.806131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.806344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.806367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.806567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.806591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.806778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.806804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.806982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.807026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.807150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.807177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.807318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.807344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.807531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.807554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.807663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.807727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 
00:34:21.099 [2024-10-28 15:30:07.807958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.807982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.808161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.808184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.808289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.808313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.808554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.808578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.808793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.808818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.808995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.809019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.809208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.809233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.809454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.809477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.809574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.809598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.809866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.809894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 
00:34:21.099 [2024-10-28 15:30:07.810146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.810169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.810319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.810343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.810520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.810544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.810667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.810691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.810851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.810880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.811120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.099 [2024-10-28 15:30:07.811149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.099 qpair failed and we were unable to recover it. 00:34:21.099 [2024-10-28 15:30:07.811320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.811344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.811567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.811634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.811850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.811875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.812083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.812110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.812271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.812294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.812445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.812522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.812783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.812808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.812963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.812991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.813231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.813255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.813436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.813460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.813663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.813690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.813893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.813918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.814058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.814086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.814231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.814255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.814447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.814472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.814583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.814621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.814805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.814831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.815071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.815096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.815259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.815283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.815391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.815421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.815620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.815711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.815937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.815962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.816080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.816104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.816234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.816258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.816460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.816483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.816698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.816723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.816963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.816987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.817204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.817227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.817520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.817544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.817722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.817748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.817875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.817900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.818052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.818078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.818267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.818296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.818456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.818493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.818715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.818739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.818918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.818942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.819101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.819125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.819314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.819346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.819530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.819554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.819716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.819744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.819926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.819950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.820149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.820173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.820328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.820351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.820514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.820538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.820730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.820755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.820877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.820901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.821141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.821165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.821359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.821388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.821537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.821560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.821803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.821828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.821989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.822013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.822178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.822201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.822396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.822421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.822621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.822682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.822875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.822900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.823057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.823083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.823279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.823307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.823490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.823517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.823636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.823670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.823907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.823932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.824173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.824197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.824417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.824441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.824609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.824705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.824911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.824934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.825226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.825250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.825412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.825442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.825607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.825631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.825764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.825788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.825998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.826022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.826191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.826215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.826351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.826375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.826622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.826646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.826803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.826827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.826987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.827012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 
00:34:21.100 [2024-10-28 15:30:07.827184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.827208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.827387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.827411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.827597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.100 [2024-10-28 15:30:07.827687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.100 qpair failed and we were unable to recover it. 00:34:21.100 [2024-10-28 15:30:07.827928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.827967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.828107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.828130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.828367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.828395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.828638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.828735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.828900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.828925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.829043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.829067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.829213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.829251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 
00:34:21.101 [2024-10-28 15:30:07.829363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.829387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.829619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.829643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.829902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.829927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.830098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.830121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.830303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.830327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.830476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.830506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.830694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.830719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.830831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.830855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.831033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.831071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.831175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.831221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 
00:34:21.101 [2024-10-28 15:30:07.831379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.831413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.831524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.831550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.831777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.831801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.831916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.831940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.832123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.832155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.832324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.832347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.832457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.832480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.832624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.832648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.832818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.832843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.833022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.833045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 
00:34:21.101 [2024-10-28 15:30:07.833256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.833280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.833393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.833416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.833582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.833606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.833812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.833837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.834072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.834101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.834340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.834364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.834512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.834592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.834864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.834942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.835194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.835218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.835452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.835519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 
00:34:21.101 [2024-10-28 15:30:07.835719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.835744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.835942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.835966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.836229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.836253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.836489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.836512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.836760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.836786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.836949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.836978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.837145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.837169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.837345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.837370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.837536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.837560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.837775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.837800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 
00:34:21.101 [2024-10-28 15:30:07.837960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.837984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.838160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.838184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.838306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.838345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.838500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.838529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.838696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.838722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.838851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.838895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.839085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.839114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.839268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.839292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.839506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.839532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.839666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.839711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 
00:34:21.101 [2024-10-28 15:30:07.839949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.839978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.840219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.840243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.840400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.840425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.840631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.840677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.840817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.840841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.840992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.841017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.841255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.841279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.841403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.841426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.841602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.841626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.841776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.841812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 
00:34:21.101 [2024-10-28 15:30:07.842051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.842074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.842220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.842243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.842483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.842515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.101 [2024-10-28 15:30:07.842673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.101 [2024-10-28 15:30:07.842698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.101 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.842850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.842876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.843056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.843082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.843247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.843271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.843500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.843528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.843769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.843794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.843932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.843957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 
00:34:21.102 [2024-10-28 15:30:07.844193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.844216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.844387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.844411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.844630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.844681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.844904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.844928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.845151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.845175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.845337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.845362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.845619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.845711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.845936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.845961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.846141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.846175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.846398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.846422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 
00:34:21.102 [2024-10-28 15:30:07.846674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.846700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.846895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.846919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.847104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.847128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.847272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.847298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.847511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.847586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.847750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.847786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.848022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.848045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.848217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.848240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.848433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.848458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.848641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.848692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 
00:34:21.102 [2024-10-28 15:30:07.848830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.848853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.849083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.849111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.849331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.849355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.849536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.849560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.849699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.849723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.849866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.849894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.850102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.850125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.850258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.850282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.850522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.850546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.850728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.850753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 
00:34:21.102 [2024-10-28 15:30:07.850843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.850867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.851035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.851074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.851251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.851281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.851482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.851550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.851791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.851817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.852052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.852081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.852267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.852301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.852487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.852561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.852800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.852826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.852968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.852993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 
00:34:21.102 [2024-10-28 15:30:07.853123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.853147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.853328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.853353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.853519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.853593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.853818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.853844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.854068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.854095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.854270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.854298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.854480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.854508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.854618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.854720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.854898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.854924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 00:34:21.102 [2024-10-28 15:30:07.855106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.102 [2024-10-28 15:30:07.855130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.102 qpair failed and we were unable to recover it. 
00:34:21.107 [2024-10-28 15:30:07.917490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.917557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.917818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.917885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.918140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.918165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.918350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.918418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.918640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.918734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.919022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.919088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.919411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.919436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.919637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.919724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.919955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.919985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.920126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.920155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 
00:34:21.107 [2024-10-28 15:30:07.920417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.920445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.920595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.920683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.920951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.921016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.921321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.921398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.921620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.921666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.921780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.921844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.922072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.922139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.922357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.922426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.922647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.922730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.922856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.922885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 
00:34:21.107 [2024-10-28 15:30:07.923013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.923092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.923348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.923414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.923641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.923731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.923864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.923891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.924050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.924118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.924352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.924417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.924611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.924638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.924771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.924797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.924919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.924983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.925155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.925220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 
00:34:21.107 [2024-10-28 15:30:07.925427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.925450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.925644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.925741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.925949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.926013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.926218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.926283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.107 [2024-10-28 15:30:07.926504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.107 [2024-10-28 15:30:07.926529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.107 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.926695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.926722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.926828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.926855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.926972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.926998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.927119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.927145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.927267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.927293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 
00:34:21.386 [2024-10-28 15:30:07.927471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.927535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.927737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.927803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.928003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.928030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.928157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.928192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.928408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.386 [2024-10-28 15:30:07.928471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.386 qpair failed and we were unable to recover it. 00:34:21.386 [2024-10-28 15:30:07.928697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.928762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.928970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.928996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.929188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.929252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.929502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.929568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.929821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.929887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 
00:34:21.387 [2024-10-28 15:30:07.930139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.930165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.930362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.930427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.930734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.930810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.931065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.931134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.931409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.931434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.931700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.931766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.931970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.932041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.932311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.932377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.932678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.932704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.932858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.932922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 
00:34:21.387 [2024-10-28 15:30:07.933168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.933233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.933480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.933544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.933831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.933860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.934012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.934085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.934300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.934365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.934675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.934741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.935004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.935030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.935270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.935335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.935588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.935668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.935923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.935987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 
00:34:21.387 [2024-10-28 15:30:07.936251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.936277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.936432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.936506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.936729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.936756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.936907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.936933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.937146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.937187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.937334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.937369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.937531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.937567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.937699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.937735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.937892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.937917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.938118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.938180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 
00:34:21.387 [2024-10-28 15:30:07.938466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.938531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.938758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.938794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.938944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.938984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.939173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.939237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.939469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.387 [2024-10-28 15:30:07.939534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.387 qpair failed and we were unable to recover it. 00:34:21.387 [2024-10-28 15:30:07.939756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.939792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.939979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.940020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.940207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.940273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.940502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.940567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.940834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.940871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 
00:34:21.388 [2024-10-28 15:30:07.941042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.941067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.941200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.941225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.941448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.941513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.941721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.941757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.941880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.941906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.942065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.942090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.942258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.942323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.942559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.942625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.942848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.942873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.943136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.943200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 
00:34:21.388 [2024-10-28 15:30:07.943509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.943574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.943880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.943916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.944088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.944117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.944304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.944370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.944685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.944739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.944892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.944927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.945113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.945140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.945246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.945272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.945462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.945526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.945765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.945791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 
00:34:21.388 [2024-10-28 15:30:07.945925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.945965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.946122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.946186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.946364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.946429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.946723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.946759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.946908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.946933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.947079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.947104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.947304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.947367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.947603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.947680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.947824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.947850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.948012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.948037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 
00:34:21.388 [2024-10-28 15:30:07.948188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.948253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.948499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.948564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.948817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.948843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.949027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.949092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.388 [2024-10-28 15:30:07.949326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.388 [2024-10-28 15:30:07.949389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.388 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.949630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.949725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.949885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.949911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.950043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.950068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.950332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.950397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.950630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.950717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 
00:34:21.389 [2024-10-28 15:30:07.950878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.950904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.950998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.951024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.951146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.951211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.951448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.951511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.951738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.951764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.951897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.951939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.952125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.952189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.952425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.952490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.952719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.952746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.952875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.952920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 
00:34:21.389 [2024-10-28 15:30:07.953114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.953179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.953369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.953433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.953694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.953725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.953855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.953890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.954033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.954096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.954306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.954370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.954604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.954643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.954788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.954823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.954998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.955063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.955355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.955420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 
00:34:21.389 [2024-10-28 15:30:07.955712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.955738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.955911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.955957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.956300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.956366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.956679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.956748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.956875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.956902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.957029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.957055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.957221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.957285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.957462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.957527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.957750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.957777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.957909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.957935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 
00:34:21.389 [2024-10-28 15:30:07.958179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.958244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.958462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.958526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.958775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.958801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.958989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.959053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.959259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.389 [2024-10-28 15:30:07.959323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.389 qpair failed and we were unable to recover it. 00:34:21.389 [2024-10-28 15:30:07.959545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.959609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.959834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.959860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.959962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.959988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.960194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.960259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.960490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.960554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 
00:34:21.390 [2024-10-28 15:30:07.960801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.960828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.960980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.961044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.961325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.961389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.961686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.961737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.961864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.961891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.962025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.962069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.962337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.962400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.962717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.962755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.962903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.962929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.963130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.963194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 
00:34:21.390 [2024-10-28 15:30:07.963505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.963571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.963826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.963862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.963977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.964024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.964187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.964213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.964464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.964528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.964774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.964809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.964958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.964984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.965154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.965194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.965419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.965484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.965716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.965752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 
00:34:21.390 [2024-10-28 15:30:07.965915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.965941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.966098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.966162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.966410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.966474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.966687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.966742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.966903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.966929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.967072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.967129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.967348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.967412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.967629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.967723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.967879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.967905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.968099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.968164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 
00:34:21.390 [2024-10-28 15:30:07.968410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.968475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.968729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.968765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.390 [2024-10-28 15:30:07.968914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.390 [2024-10-28 15:30:07.968939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.390 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.969040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.969066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.969233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.969298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.969505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.969569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.969827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.969853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.969949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.969975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.970098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.970146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.970361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.970427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 
00:34:21.391 [2024-10-28 15:30:07.970693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.970739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.970873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.970899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.971071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.971135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.971376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.971441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.971679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.971733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.971900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.971926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.972132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.972198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.972403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.972468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.972692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.972719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.972846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.972871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 
00:34:21.391 [2024-10-28 15:30:07.973013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.973078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.973327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.973391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.973677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.973724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.973868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.973903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.974092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.974157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.974407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.974472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.391 qpair failed and we were unable to recover it. 00:34:21.391 [2024-10-28 15:30:07.974722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.391 [2024-10-28 15:30:07.974749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.974890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.974925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.975174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.975239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.975473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.975538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 
00:34:21.392 [2024-10-28 15:30:07.975757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.975783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.975943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.975968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.976137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.976202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.976463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.976527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.976735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.976761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.976894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.976920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.977073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.977138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.977382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.977446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.977666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.977691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.977838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.977863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 
00:34:21.392 [2024-10-28 15:30:07.978025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.978089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.978302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.978367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.978692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.978738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.978927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.978976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.979208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.979273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.979519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.979584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.979887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.979913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.980051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.980120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.980421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.980486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.980771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.980812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 
00:34:21.392 [2024-10-28 15:30:07.980969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.981011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.981122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.981147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.981350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.981414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.981684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.981738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.981899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.981925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.982081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.982121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.982270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.982334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.982571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.982636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.982863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.982890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.983068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.983132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 
00:34:21.392 [2024-10-28 15:30:07.983361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.983425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.983610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.983692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.983867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.983892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.984061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.984111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.984351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.984416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.984678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.392 [2024-10-28 15:30:07.984745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.392 qpair failed and we were unable to recover it. 00:34:21.392 [2024-10-28 15:30:07.984929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.984954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.985075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.985099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.985314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.985379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.985557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.985621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 
00:34:21.393 [2024-10-28 15:30:07.985904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.985931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.986102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.986165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.986421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.986486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.986700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.986766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.986972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.986997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.987117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.987142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.987363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.987428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.987678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.987744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.987988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.988014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.988197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.988262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 
00:34:21.393 [2024-10-28 15:30:07.988470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.988533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.988813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.988883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.989138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.989163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.989317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.989357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.989533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.989598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.989796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.989862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.990120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.990144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.990319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.990383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.990604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.990686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.990895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.990970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 
00:34:21.393 [2024-10-28 15:30:07.991167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.991191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.991339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.991363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.991564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.991629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.991888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.991952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.992155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.992180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.992330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.992356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.992606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.992700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.992887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.992913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.993053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.993079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 00:34:21.393 [2024-10-28 15:30:07.993265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.393 [2024-10-28 15:30:07.993330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.393 qpair failed and we were unable to recover it. 
00:34:21.393 [2024-10-28 15:30:07.993567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.393 [2024-10-28 15:30:07.993633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:21.393 qpair failed and we were unable to recover it.
00:34:21.393 [... the same three-line failure sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt from [2024-10-28 15:30:07.993567] through [2024-10-28 15:30:08.050543] (console time 00:34:21.393 to 00:34:21.397), always against the same tqpair, address, and port ...]
00:34:21.397 [2024-10-28 15:30:08.050759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.050786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.050952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.051010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.051181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.051247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.051462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.051530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.051793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.051820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.051941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.051983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.052181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.052258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.052505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.052571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.052840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.052870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.053035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.053110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 
00:34:21.397 [2024-10-28 15:30:08.053351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.053417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.053640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.053757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.054021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.054045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.054212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.054279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.054538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.054605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.054843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.054870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.055007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.055034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.055226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.055292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.055536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.055611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.055859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.055948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 
00:34:21.397 [2024-10-28 15:30:08.056155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.056181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.056354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.056422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.056682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.056761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.057088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.057157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.057380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.057410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.057563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.057626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.057869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.057936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.058172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.058239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.058477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.058504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.058644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.058743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 
00:34:21.397 [2024-10-28 15:30:08.058972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.059039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.059288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.059359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.059580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.059619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.059805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.059879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.060057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.060123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.060380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.060446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.397 [2024-10-28 15:30:08.060727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.397 [2024-10-28 15:30:08.060755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.397 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.060940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.061006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.061231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.061308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.061539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.061606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.061873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.061904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.062036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.062061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.062302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.062378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.062590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.062674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.062884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.062911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.063071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.063114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.063315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.063383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.063618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.063711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.063921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.063948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.064094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.064119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.064371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.064437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.064707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.064734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.064830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.064857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.064993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.065020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.065251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.065320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.065579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.065646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.065894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.065920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.066068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.066134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.066349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.066415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.066645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.066764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.066995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.067021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.067139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.067165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.067429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.067499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.067689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.067757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.067986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.068028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.068203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.068269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.068464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.068541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.068808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.068877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.069136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.069162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.069276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.069352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.069577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.069648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.069891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.069959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.070196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.070239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.070417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.070484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.070711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.070781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.070997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.071072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.071290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.071331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.071487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.071554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.071800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.071869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.072121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.072189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.072412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.072437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.072572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.072630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.072895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.072965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.073204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.073270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.073525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.073600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.073786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.073813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.073979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.074046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.074297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.074376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.074633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.074718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.074877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.074904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.075100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.075169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.075343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.075410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.075709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.075756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.075886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.075920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.076086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.076164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.076374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.076441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.076682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.076709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.076830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.076918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.077168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.077235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.077485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.077570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.077865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.077893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.078031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.078057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.078203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.078230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.078357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.078383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.078535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.078577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.078707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.078735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.078860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.078888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.079044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.079117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.079326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.079353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.079483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.079509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 
00:34:21.398 [2024-10-28 15:30:08.079733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.079761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.079892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.079920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.080097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.080137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.398 [2024-10-28 15:30:08.080345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.398 [2024-10-28 15:30:08.080414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.398 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.080697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.080775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.081016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.081084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.081286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.081313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.081482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.081562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.081839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.081909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.082150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.082218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 
00:34:21.399 [2024-10-28 15:30:08.082446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.082472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.082614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.082696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.082916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.082984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.083210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.083276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.083453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.083478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.083665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.083693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.083794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.083821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.084004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.084078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.084285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.084310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.084496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.084538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 
00:34:21.399 [2024-10-28 15:30:08.084745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.084825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.085065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.085133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.085375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.085402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.085534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.085561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.085745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.085773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.085916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.085942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.086136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.086163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.086285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.086313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.086478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.086510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.086663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.086695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 
00:34:21.399 [2024-10-28 15:30:08.086819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.086846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.087024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab530 is same with the state(6) to be set 00:34:21.399 [2024-10-28 15:30:08.087414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.087466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.087612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.087643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.087803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.087830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.087953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.087995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.088130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.088159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.088295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.088320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.088440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.088465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.088579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.088608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 
00:34:21.399 [2024-10-28 15:30:08.088753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.088778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.088914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.088940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.089177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.089214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.089351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.089384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.089563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.089591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.089774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.089801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.089896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.089922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.090029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.090055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.090198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.090263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.090470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.090495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 
00:34:21.399 [2024-10-28 15:30:08.090667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.090700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.090798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.090825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.090963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.090988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.091175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.091240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.091431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.091497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.091706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.091753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.091861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.091890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.092012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.092082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.092286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.092352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.092584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.092610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 
00:34:21.399 [2024-10-28 15:30:08.092759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.092785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.092878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.092904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.093103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.093127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.093308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.093336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.093435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.093464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.093585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.093611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.093732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.093759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.093845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.093871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.094015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.094040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.094156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.094232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 
00:34:21.399 [2024-10-28 15:30:08.094456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.094521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.094732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.094759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.094853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.094879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.095019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.095083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.095296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.095321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.095448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.095484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.095723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.095755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.095879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.095914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.096079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.096161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.399 qpair failed and we were unable to recover it. 00:34:21.399 [2024-10-28 15:30:08.096393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.399 [2024-10-28 15:30:08.096465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 
00:34:21.400 [2024-10-28 15:30:08.096649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.096688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.096789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.096816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.096963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.096993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.097117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.097180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.097340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.097414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.097564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.097628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.097807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.097837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.098000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.098064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.098235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.098261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.098416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.098442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 
00:34:21.400 [2024-10-28 15:30:08.098539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.098568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.098656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.098688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.098800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.098828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.098987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.099025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.099147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.099178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.099310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.099338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.099502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.099535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.099677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.099712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.099854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.099881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.099988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.100030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 
00:34:21.400 [2024-10-28 15:30:08.100169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.100196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.100348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.100378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.100502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.100528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.100625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.100658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.100765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.100793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.100891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.100919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.101054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.101081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.101213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.101260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.101392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.101431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.101559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.101585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 
00:34:21.400 [2024-10-28 15:30:08.101705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.101732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.101832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.101859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.101986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.102028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.102196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.102220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.102360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.102386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.102494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.102525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.102670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.102698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.102827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.102853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.103024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.103049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.103179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.103203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 
00:34:21.400 [2024-10-28 15:30:08.103390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.103457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.103716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.103747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.103897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.103925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.104027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.104068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.104231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.104302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.104537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.104562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.104679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.104707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.104832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.104860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.104997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.105038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.105187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.105213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 
00:34:21.400 [2024-10-28 15:30:08.105357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.105382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.105528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.105603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.105825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.105865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.105977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.106004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.106125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.106152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.106302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.106328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.106453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.106479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.106605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.106631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.106785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.106812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.106979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.107015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 
00:34:21.400 [2024-10-28 15:30:08.107154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.107183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.107296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.107375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.107553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.107580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.107679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.107705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.107815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.107844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.108010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.108076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.108243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.108307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.108461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.108487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.400 [2024-10-28 15:30:08.108706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.400 [2024-10-28 15:30:08.108735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.400 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.108856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.108901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.109046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.109110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.109242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.109267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.109429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.109455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.109709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.109738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.109841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.109870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.110012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.110038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.110208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.110236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.110361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.110394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.110497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.110523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.110676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.110718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.110852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.110879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.111028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.111068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.111162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.111187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.111355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.111381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.111501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.111542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.111712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.111779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.111971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.112020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.112215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.112261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.112430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.112455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.112571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.112596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.112772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.112838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.113061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.113127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.113324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.113388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.113565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.113589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.113699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.113776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.113927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.113996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.114171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.114196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.114313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.114339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.114523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.114549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.114663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.114695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.114781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.114807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.114986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.115051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.115280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.115344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.115575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.115615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.115758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.115785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.115962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.116026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.116202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.116268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.116474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.116538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.116704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.116730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.116828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.116887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.117044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.117100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.117305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.117368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.117517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.117542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.117711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.117750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.117882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.117909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.118083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.118110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.118289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.118361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.118490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.118520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.118623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.118658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.118820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.118889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.119027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.119084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.119269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.119334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.119467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.119491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.119626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.119672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.119823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.119888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.120092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.120167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.120301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.120346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.120449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.120475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.120609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.120660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.120843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.120912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.121108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.121177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.121397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.121474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.121657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.121686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.121795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.121822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.121972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.121997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.122098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.122123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.122302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.122373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.122617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.122720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.122856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.122890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.123060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.123136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 
00:34:21.401 [2024-10-28 15:30:08.123403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.123471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.123722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.123750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.123864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.123894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.124028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.124096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.124340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.124407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.401 qpair failed and we were unable to recover it. 00:34:21.401 [2024-10-28 15:30:08.124620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.401 [2024-10-28 15:30:08.124671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.124834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.124861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.125048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.125123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.125394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.125460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.125721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.125749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 
00:34:21.402 [2024-10-28 15:30:08.125905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.125968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.126210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.126276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.126577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.126645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.126867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.126896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.127065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.127091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.127203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.127230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.127416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.127483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.127757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.127784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.127892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.127919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.128077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.128154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 
00:34:21.402 [2024-10-28 15:30:08.128408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.128475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.128749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.128776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.128878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.128907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.129056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.129082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.129226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.129252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.129392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.129473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.129733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.129761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.129867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.129895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.130078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.130111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.130311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.130381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 
00:34:21.402 [2024-10-28 15:30:08.130638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.130741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.130931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.130980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.131158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.131193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.131341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.131375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.131541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.131615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.131807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.131848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.131993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.132043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.132218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.132267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.132390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.132439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.132570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.132597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 
00:34:21.402 [2024-10-28 15:30:08.132725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.132753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.132886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.132937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.133051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.133101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.133232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.133258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.133395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.133422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.133579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.133609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.133776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.133804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.133979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.134006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.134135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.134172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.134326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.134354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 
00:34:21.402 [2024-10-28 15:30:08.134479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.134506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.134639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.134679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.134799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.134836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.135019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.135074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.135240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.135280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.135398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.135426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.135577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.135604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.135732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.135770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.135881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.135930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.136101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.136148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 
00:34:21.402 [2024-10-28 15:30:08.136297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.136324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.136449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.136485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.136607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.136633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.136747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.136774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.136875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.136902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.137027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.137059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.137184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.137211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.137362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.137389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.137515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.137547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.137700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.137739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 
00:34:21.402 [2024-10-28 15:30:08.137890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.137940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.138134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.138170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.138372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.138399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.138567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.138594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.138758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.402 [2024-10-28 15:30:08.138796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.402 qpair failed and we were unable to recover it. 00:34:21.402 [2024-10-28 15:30:08.138979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.139015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.139190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.139233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.139379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.139413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.139547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.139580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.139693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.139745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.139902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.139938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.140121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.140157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.140290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.140333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.140549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.140575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.140733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.140770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.140883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.140935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.141091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.141127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.141295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.141322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.141420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.141457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.141585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.141611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.141751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.141778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.141875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.141902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.142033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.142063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.142204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.142231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.142403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.142430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.142581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.142616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.142738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.142765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.142916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.142944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.143077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.143151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.143342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.143408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.143608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.143635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.143790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.143826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.144020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.144086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.144282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.144347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.144520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.144553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.144647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.144687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.144774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.144820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.144979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.145043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.145253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.145320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.145507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.145533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.145674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.145707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.145830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.145875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.146045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.146110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.146279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.146344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.146473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.146499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.146666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.146712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.146832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.146859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.146961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.146987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.147110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.147136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.147297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.147338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.147431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.147457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.147630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.147677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.147781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.147807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.147907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.147933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.148079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.148104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.148247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.148312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.148490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.148540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.148657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.148711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.148844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.148870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.149024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.149049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.149169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.149194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.149333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.149359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.149479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.149505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.149643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.149677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.149772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.149798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.149903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.149943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.150075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.150101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.150216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.150241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.150366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.150390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.150555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.150580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.150674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.150701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.150851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.150917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.151120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.151160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.151341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.151366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.151484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.151509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.151704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.151763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.151943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.152009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.152180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.152244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.152434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.152461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.152606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.152633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 
00:34:21.403 [2024-10-28 15:30:08.152791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.152856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.153068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.403 [2024-10-28 15:30:08.153131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.403 qpair failed and we were unable to recover it. 00:34:21.403 [2024-10-28 15:30:08.153342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.153407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.153587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.153701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.153848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.153914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.154155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.154220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.154450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.154515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.154666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.154732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.154883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.154909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.155058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.155102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 
00:34:21.404 [2024-10-28 15:30:08.155256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.155282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.155493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.155558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.155731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.155758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.155841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.155868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.155995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.156021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.156187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.156212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.156333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.156357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.156528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.156569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.156695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.156721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.156845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.156871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 
00:34:21.404 [2024-10-28 15:30:08.156967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.156993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.157133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.157173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.157303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.157328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.157492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.157533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.157695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.157723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.157825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.157851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.158004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.158030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.158181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.158208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.158373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.158399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.158555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.158581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 
00:34:21.404 [2024-10-28 15:30:08.158686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.158713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.158838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.158864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.159029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.159069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.159195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.159229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.159368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.159395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.159526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.159557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.159690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.159718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.159822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.159848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.159972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.159998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.160123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.160149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 
00:34:21.404 [2024-10-28 15:30:08.160273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.160300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.160449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.160476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.160626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.160657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.160788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.160814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.160911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.160937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.161094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.161120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.161256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.161283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.161405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.161470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.161627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.161664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.161764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.161791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 
00:34:21.404 [2024-10-28 15:30:08.161929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.161962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.162121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.162152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.162285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.162321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.162457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.162494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.162602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.162628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.162772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.162808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.162977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.163013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.163171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.163207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.163327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.163353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.163490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.163555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 
00:34:21.404 [2024-10-28 15:30:08.163752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.163790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.163944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.164012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.164236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.164302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.164522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.164588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.164793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.164837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.165047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.165116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.165318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.165385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.165557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.165621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.165792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.165830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.166008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.166087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 
00:34:21.404 [2024-10-28 15:30:08.166328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.166412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.166601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.404 [2024-10-28 15:30:08.166683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.404 qpair failed and we were unable to recover it. 00:34:21.404 [2024-10-28 15:30:08.166825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.166852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.167019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.167070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.167246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.167271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.167418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.167449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.167573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.167599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.167719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.167759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.167893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.167921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.168027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.168052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 
00:34:21.405 [2024-10-28 15:30:08.168176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.168201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.168300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.168324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.168444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.168468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.168621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.168670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.168769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.168795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.168918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.168943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.169111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.169136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.169225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.169251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.169373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.169398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.169554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.169580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 
00:34:21.405 [2024-10-28 15:30:08.169714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.169741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.169872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.169898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.170058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.170084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.170194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.170220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.170388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.170412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.170531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.170556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.170681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.170707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.170802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.170828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.170982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.171022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.171178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.171203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 
00:34:21.405 [2024-10-28 15:30:08.171315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.171341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.171493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.171519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.171615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.171645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.171753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.171779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.171883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.171909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.172062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.172097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.172220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.172245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.172366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.172391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.172547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.172573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.172704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.172731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 
00:34:21.405 [2024-10-28 15:30:08.172845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.172871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.172970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.172996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.173142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.173167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.173283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.173308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.173460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.173485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.173603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.173628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.173762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.173802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.173940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.173968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.174103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.174145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.174269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.174296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 
00:34:21.405 [2024-10-28 15:30:08.174401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.174427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.174560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.174586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.174701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.174728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.174829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.174855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.174961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.174986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.175106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.175138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.175258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.175283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.175367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.175393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.175495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.175523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.175627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.175675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 
00:34:21.405 [2024-10-28 15:30:08.175793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.175820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.175941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.175968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.176086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.176118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.176220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.176247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.176376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.176402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.176546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.176571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.176687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.176715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.176823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.176850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.176972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.176997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.177147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.177173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 
00:34:21.405 [2024-10-28 15:30:08.177255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.177281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.177395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.177420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.177585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.177611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.177734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.177765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.177870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.177896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.177997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.178037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.178164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.178191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.178337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.178361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.178497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.178522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.405 qpair failed and we were unable to recover it. 00:34:21.405 [2024-10-28 15:30:08.178621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.405 [2024-10-28 15:30:08.178668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.178791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.178816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.178917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.178943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.180148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.180227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.180413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.180440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.180608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.180633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.180769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.180795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.180887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.180918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.181048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.181088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.181197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.181222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.181400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.181424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.181548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.181572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.181690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.181717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.181823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.181849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.181965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.181990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.182163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.182188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.182360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.182399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.182550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.182615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.182756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.182783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.182874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.182900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.183020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.183052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.183154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.183177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.183349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.183375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.183553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.183619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.183755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.183781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.183939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.183978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.184064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.184089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.184238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.184263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.184361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.184385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.184523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.184548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.184673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.184699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.184793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.184820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.184978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.185004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.185142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.185166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.185271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.185296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.185404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.185430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.185537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.185562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.185704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.185731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.185853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.185880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.186028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.186054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.186150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.186191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.186309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.186351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.186470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.186497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.186629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.186660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.186758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.186784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.186907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.186933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.187064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.187089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.187224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.187248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.187399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.187425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.187558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.187584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.187689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.187715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.187802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.187828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.187958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.187999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.188137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.188200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.188403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.188466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.188683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.188709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.188805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.188831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.188985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.189048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.189262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.189332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.189493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.189519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.189646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.189690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.189798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.189824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.189940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.189965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.190115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.190179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.190423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.190488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.190731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.190757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.190857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.190884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.191007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.191033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.191146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.191171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.191379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.191443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.191647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.191680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 
00:34:21.406 [2024-10-28 15:30:08.191789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.191814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.191971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.192035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.192227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.192293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.192524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.192588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.192826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.192857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.192941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.192967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.193140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.406 [2024-10-28 15:30:08.193210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.406 qpair failed and we were unable to recover it. 00:34:21.406 [2024-10-28 15:30:08.193440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.193505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.193741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.193767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.193870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.193896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 
00:34:21.407 [2024-10-28 15:30:08.194058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.194122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.194355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.194423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.194534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.194559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.194681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.194708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.194810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.194836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.194971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.194996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.195158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.195185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.195443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.195506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.195724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.195750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.195855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.195881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 
00:34:21.407 [2024-10-28 15:30:08.196014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.196040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.196166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.196234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.196446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.196510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.196680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.196708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.196837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.196862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.197052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.197116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.197366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.197430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.197627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.197717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.197815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.197841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.197960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.197985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 
00:34:21.407 [2024-10-28 15:30:08.198173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.198237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.198438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.198512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.198716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.198742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.198847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.198873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.199018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.199083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.199306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.199371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.199575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.199639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.199794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.199819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.199965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.199990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.200218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.200282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 
00:34:21.407 [2024-10-28 15:30:08.200534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.200598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.200786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.200812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.200916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.200968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.201171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.201235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.201412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.201476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.201681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.201708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.201800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.201826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.203227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.203307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.203530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.203596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.203777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.203804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 
00:34:21.407 [2024-10-28 15:30:08.203909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.203935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.204088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.204154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.204391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.204455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.204640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.204742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.204836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.204862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.204981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.205019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.205232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.205295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.205465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.205528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.205718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.205750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.205856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.205882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 
00:34:21.407 [2024-10-28 15:30:08.206027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.206091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.206344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.206416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.206669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.206735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.206841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.206867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.206970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.206997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.207258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.207334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.207504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.207568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.207784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.207811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.207917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.207943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.208082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.208146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 
00:34:21.407 [2024-10-28 15:30:08.208387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.208461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.208711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.208738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.208830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.208856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.208980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.209006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.209156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.209181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.209354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.209418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.209604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.209630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.209754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.407 [2024-10-28 15:30:08.209780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.407 qpair failed and we were unable to recover it. 00:34:21.407 [2024-10-28 15:30:08.209880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.209906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.210091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.210116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.210297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.210322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.210425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.210489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.210665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.210691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.210819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.210845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.210964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.211024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.211232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.211296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.211518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.211580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.211778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.211805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.211904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.211929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.212042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.212066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.212249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.212275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.212431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.212498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.212714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.212739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.212841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.212868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.213014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.213039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.213135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.213184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.213420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.213484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.213663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.213689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.213785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.213811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.213912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.213953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.214107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.214167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.214405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.214469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.214687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.214737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.214838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.214865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.214958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.214984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.215145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.215209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.215421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.215485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.215667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.215714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.215815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.215841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.215979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.216004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.216164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.216229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.216459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.216523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.216723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.216750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.216851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.216877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.217035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.217099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.217326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.217391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.217779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.217807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.217922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.217972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.219253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.219329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.219580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.219647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.219803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.219829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.219930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.219970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.220055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.220081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.220258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.220324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.220589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.220692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.220790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.220816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.220920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.220951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.221115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.221140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.221258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.221282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.221461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.221526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.221705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.221731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.221830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.221856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.222005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.222071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.222264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.222337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.222556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.222621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.222782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.222809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.222904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.222930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.223137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.223206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.223517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.223581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.223770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.223797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.223897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.223924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.224071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.224135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.224320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.224384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.224592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.224673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.224802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.224828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.224923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.224949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.225058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.225084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.225296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.225360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.225547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.225613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.225778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.225804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 
00:34:21.408 [2024-10-28 15:30:08.225893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.225920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.226116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.226181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.226396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.226464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.226692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.408 [2024-10-28 15:30:08.226726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.408 qpair failed and we were unable to recover it. 00:34:21.408 [2024-10-28 15:30:08.226828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.226853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.226985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.227058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.227306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.227373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.227546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.227607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.227776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.227802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.227905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.227932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 
00:34:21.409 [2024-10-28 15:30:08.228045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.228070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.228268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.228302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.228435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.228485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.228612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.228664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.228797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.228830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.228921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.228947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.229035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.229061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.229162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.229189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.229304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.229338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.229502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.229538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 
00:34:21.409 [2024-10-28 15:30:08.229682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.229730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.229818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.229844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.229939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.229964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.230084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.230110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.230300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.230365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.230607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.230670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.230782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.230810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.230911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.230939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.231062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.231132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.231336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.231405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 
00:34:21.409 [2024-10-28 15:30:08.231640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.231731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.231833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.231860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.231991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.232028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.232219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.232284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.232553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.232618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.232810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.232841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.232949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.232992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.233130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.233184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.233369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.233428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 00:34:21.409 [2024-10-28 15:30:08.233628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.409 [2024-10-28 15:30:08.233666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.409 qpair failed and we were unable to recover it. 
00:34:21.696 [2024-10-28 15:30:08.233778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.233807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.233913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.233950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.234097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.234124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.234222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.234249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.234405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.234436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.234531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.234558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.234692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.234720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.234844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.234880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.235076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.235135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.235286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.235324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 
00:34:21.696 [2024-10-28 15:30:08.235474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.235513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.235633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.235668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.235762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.235789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.235882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.235909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.236060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.236114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.236305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.236352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.236487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.236534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.236658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.236690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.236802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.236854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.236995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.237042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 
00:34:21.696 [2024-10-28 15:30:08.237129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.237154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.237302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.237328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.237460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.237495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.237587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.237614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.237736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.237764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.237865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.237892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.238009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.238036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.238175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.238201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.238327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.238354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.238451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.238478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 
00:34:21.696 [2024-10-28 15:30:08.238618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.238668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.238779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.238808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.238897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.238924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.239084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.239111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.239237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.239264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.239410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.696 [2024-10-28 15:30:08.239437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.696 qpair failed and we were unable to recover it. 00:34:21.696 [2024-10-28 15:30:08.239555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.239581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.239684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.239712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.239797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.239823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.239926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.240001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 
00:34:21.697 [2024-10-28 15:30:08.240232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.240298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.240499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.240565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.240785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.240814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.240919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.240946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.241138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.241189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.241323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.241372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.241497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.241524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.241643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.241676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.241779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.241816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.241969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.241994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 
00:34:21.697 [2024-10-28 15:30:08.242134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.242158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.242275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.242300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.242408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.242449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.242551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.242576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.242705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.242733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.242857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.242884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.243004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.243029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.243121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.243166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.243305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.243345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.243444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.243470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 
00:34:21.697 [2024-10-28 15:30:08.243573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.243600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.243692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.243719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.243816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.243842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.243975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.244015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.244187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.244215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.244353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.244380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.244480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.244515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.244675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.244740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.244865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.244904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.245112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.245179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 
00:34:21.697 [2024-10-28 15:30:08.245374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.245439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.245676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.245731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.245841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.697 [2024-10-28 15:30:08.245876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.697 qpair failed and we were unable to recover it. 00:34:21.697 [2024-10-28 15:30:08.246024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.246088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.246300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.246365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.246592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.246618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.246734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.246762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.246856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.246883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.247041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.247106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.247333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.247397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 
00:34:21.698 [2024-10-28 15:30:08.247623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.247721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.247824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.247851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.247999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.248060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.248266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.248332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.248606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.248694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.248806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.248832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.248954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.249020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.249241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.249306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.249534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.249598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.249790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.249818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 
00:34:21.698 [2024-10-28 15:30:08.249918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.249945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.250089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.250141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.250383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.250448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.250618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.250668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.250784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.250810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.250950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.250986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.251243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.251308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.251520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.251595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.251788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.251814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.251917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.251943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 
00:34:21.698 [2024-10-28 15:30:08.252082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.252127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.252347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.252412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.252607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.252631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.252747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.252773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.252878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.252904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.253091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.253115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.253305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.253368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.253690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.253738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.253826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.253852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 00:34:21.698 [2024-10-28 15:30:08.253940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.698 [2024-10-28 15:30:08.253980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.698 qpair failed and we were unable to recover it. 
00:34:21.699 [2024-10-28 15:30:08.255711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.699 [2024-10-28 15:30:08.255750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:21.699 qpair failed and we were unable to recover it.
00:34:21.699 [2024-10-28 15:30:08.255851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.699 [2024-10-28 15:30:08.255879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:21.699 qpair failed and we were unable to recover it.
00:34:21.699 [2024-10-28 15:30:08.256005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.699 [2024-10-28 15:30:08.256045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:21.699 qpair failed and we were unable to recover it.
00:34:21.699 [2024-10-28 15:30:08.256163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.699 [2024-10-28 15:30:08.256200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:21.699 qpair failed and we were unable to recover it.
00:34:21.699 [2024-10-28 15:30:08.256457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.699 [2024-10-28 15:30:08.256522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:21.699 qpair failed and we were unable to recover it.
00:34:21.699 [2024-10-28 15:30:08.256746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.699 [2024-10-28 15:30:08.256772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:21.699 qpair failed and we were unable to recover it.
00:34:21.704 [2024-10-28 15:30:08.297427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.297450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.297623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.297708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.297920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.297985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.298212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.298236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.298427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.298491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.298699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.298773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.299049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.299073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.299177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.299214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.299444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.299509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.299705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.299730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 
00:34:21.704 [2024-10-28 15:30:08.299823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.299847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.300001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.300067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.300320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.300351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.300527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.704 [2024-10-28 15:30:08.300591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.704 qpair failed and we were unable to recover it. 00:34:21.704 [2024-10-28 15:30:08.300816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.300881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.301121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.301144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.301323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.301387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.301623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.301700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.301904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.301929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.302164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.302227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 
00:34:21.705 [2024-10-28 15:30:08.302529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.302594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.302828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.302854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.302974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.302998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.303256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.303321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.303550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.303624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.303847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.303871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.303990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.304056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.304270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.304293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.304453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.304491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.304667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.304734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 
00:34:21.705 [2024-10-28 15:30:08.304931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.304956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.305105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.305129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.305427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.305504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.305780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.305805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.305934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.305958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.306205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.306271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.306507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.306531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.306636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.306687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.306918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.306985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 00:34:21.705 [2024-10-28 15:30:08.307247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.705 [2024-10-28 15:30:08.307273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.705 qpair failed and we were unable to recover it. 
00:34:21.705 [2024-10-28 15:30:08.307522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.307587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.307817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.307842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.307968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.308002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.308138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.308218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.308459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.308525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.308754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.308790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.308908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.308946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.309089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.309155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.309399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.309423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.309604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.309686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 
00:34:21.706 [2024-10-28 15:30:08.309889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.309956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.310151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.310174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.310350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.310424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.310749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.310819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.311045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.311077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.311191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.311215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.311469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.311544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.311798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.311823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.311925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.311950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.312160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.312236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 
00:34:21.706 [2024-10-28 15:30:08.312541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.312565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.312750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.312817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.313065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.313130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.313379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.313402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.313561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.313639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.313887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.313953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.314184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.314208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.314395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.314459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.314742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.314778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.314878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.314902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 
00:34:21.706 [2024-10-28 15:30:08.315124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.315193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.315451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.315516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.315725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.706 [2024-10-28 15:30:08.315751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.706 qpair failed and we were unable to recover it. 00:34:21.706 [2024-10-28 15:30:08.315884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.315944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.316165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.316233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.316503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.316527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.316731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.316797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.317016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.317081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.317286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.317309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.317415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.317439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 
00:34:21.707 [2024-10-28 15:30:08.317621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.317700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.317909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.317933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.318081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.318119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.318299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.318369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.318552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.318576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.318716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.318742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.318881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.318946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.319235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.319258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.319472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.319542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.319776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.319843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 
00:34:21.707 [2024-10-28 15:30:08.320092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.320130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.320306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.320371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.320607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.320685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.320860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.320884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.321065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.321126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.321357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.321423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.321681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.321744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.321857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.321881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.322096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.322162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.322341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.322364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 
00:34:21.707 [2024-10-28 15:30:08.322535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.322583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.322774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.322800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.322889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.322913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.323029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.323053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.323246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.323325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.323591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.323614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.323759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.323784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.323877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.323902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.324104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.324128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.324267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.324309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 
00:34:21.707 [2024-10-28 15:30:08.324531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.324596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.707 qpair failed and we were unable to recover it. 00:34:21.707 [2024-10-28 15:30:08.324827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.707 [2024-10-28 15:30:08.324851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.324973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.325022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.325339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.325405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.325672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.325702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.325799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.325836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.325991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.326056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.326289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.326312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.326449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.326525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.326722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.326792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 
00:34:21.708 [2024-10-28 15:30:08.327123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.327147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.327322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.327386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.327599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.327676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.327883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.327907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.328020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.328044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.328247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.328313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.328515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.328539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.328676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.328700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.328888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.328953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.329196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.329219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 
00:34:21.708 [2024-10-28 15:30:08.329321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.329345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.329674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.329740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.329912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.329937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.330091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.330120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.330302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.330367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.330639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.330738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.330883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.330908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.331154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.331226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.708 [2024-10-28 15:30:08.331481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.708 [2024-10-28 15:30:08.331546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.708 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.331798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.331823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 
00:34:21.709 [2024-10-28 15:30:08.331926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.331951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.332105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.332128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.332303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.332376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.332613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.332717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.332927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.332955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.333096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.333120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.333448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.333513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.333744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.333769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.333863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.333887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.334078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.334152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 
00:34:21.709 [2024-10-28 15:30:08.334400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.334424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.334617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.334725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.334947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.335012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.335262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.335285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.335494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.335558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.335794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.335861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.336102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.336126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.336305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.336348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.336548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.336614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.336824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.336849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 
00:34:21.709 [2024-10-28 15:30:08.336942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.336966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.337175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.337247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.337441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.337465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.337607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.337642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.337891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.337957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.338201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.338225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.338402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.338467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.338631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.338728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.338957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.339005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.339240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.339307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 
00:34:21.709 [2024-10-28 15:30:08.339702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.339769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.339974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.340012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.340218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.340282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.340550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.340615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.709 [2024-10-28 15:30:08.340806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.709 [2024-10-28 15:30:08.340830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.709 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.340933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.340957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.341137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.341202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.341460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.341484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.341663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.341714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.341813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.341838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 
00:34:21.710 [2024-10-28 15:30:08.341960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.341983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.342124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.342187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.342400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.342465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.342706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.342731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.342828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.342857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.343147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.343213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.343418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.343442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.343543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.343567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.343739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.343805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.344027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.344051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 
00:34:21.710 [2024-10-28 15:30:08.344195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.344233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.344432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.344497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.344716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.344741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.344862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.344887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.345152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.345218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.345463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.345495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.345665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.345732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.345947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.346012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.346233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.346258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.346410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.346472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 
00:34:21.710 [2024-10-28 15:30:08.346644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.346749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.347003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.347027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.347211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.347275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.347547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.347612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.347811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.347836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.347929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.347953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.348079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.710 [2024-10-28 15:30:08.348144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.710 qpair failed and we were unable to recover it. 00:34:21.710 [2024-10-28 15:30:08.348458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.348483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.348704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.348729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.348824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.348848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 
00:34:21.711 [2024-10-28 15:30:08.348977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.349001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.349204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.349268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.349560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.349625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.349847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.349871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.350017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.350070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.350288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.350353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.350587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.350611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.350753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.350778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.350877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.350901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.351040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.351064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 
00:34:21.711 [2024-10-28 15:30:08.351227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.351307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.351565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.351631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.351900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.351924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.352018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.352042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.352285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.352361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.352675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.352700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.352818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.352894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.353152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.353217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.353462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.353486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.353603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.353669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 
00:34:21.711 [2024-10-28 15:30:08.353880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.353947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.354159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.354183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.354344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.354381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.354560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.354624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.354832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.354857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.355002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.355027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.355221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.355285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.355514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.355538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.355706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.355774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.356029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.356094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 
00:34:21.711 [2024-10-28 15:30:08.356402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.356425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.356573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.356641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.356892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.356957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.357237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.357260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.711 [2024-10-28 15:30:08.357441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.711 [2024-10-28 15:30:08.357506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.711 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.357724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.357749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.357840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.357864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.358015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.358039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.358239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.358304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.358547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.358570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 
00:34:21.712 [2024-10-28 15:30:08.358735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.358804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.359028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.359095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.359326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.359350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.359489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.359568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.359820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.359888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.360122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.360146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.360348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.360411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.360725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.360792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.361100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.361123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.361280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.361344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 
00:34:21.712 [2024-10-28 15:30:08.361583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.361663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.361868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.361893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.362041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.362095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.362333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.362399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.362595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.362622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.362732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.362758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.362851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.362875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.363020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.363043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.363206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.363270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.363510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.363575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 
00:34:21.712 [2024-10-28 15:30:08.363796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.363821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.363925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.363949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.364112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.364177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.364421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.364455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.364609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.364700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.364937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.365003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.365316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.365340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.365593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.365673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.365949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.366014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.366187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.366216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 
00:34:21.712 [2024-10-28 15:30:08.366327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.366351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.712 [2024-10-28 15:30:08.366484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.712 [2024-10-28 15:30:08.366549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.712 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.366747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.366772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.366855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.366879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.367056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.367121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.367404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.367427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.367621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.367736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.367944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.368010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.368241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.368265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.368401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.368474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 
00:34:21.713 [2024-10-28 15:30:08.368710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.368777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.369044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.369068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.369221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.369260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.369568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.369634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.369857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.369881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.370023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.370078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.370328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.370393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.370638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.370737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.370869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.370893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 00:34:21.713 [2024-10-28 15:30:08.371062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.713 [2024-10-28 15:30:08.371127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.713 qpair failed and we were unable to recover it. 
00:34:21.713 [2024-10-28 15:30:08.371374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.713 [2024-10-28 15:30:08.371398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:21.713 qpair failed and we were unable to recover it.
00:34:21.719 (the same three messages repeat for every subsequent reconnect attempt from [2024-10-28 15:30:08.371591] through [2024-10-28 15:30:08.423591]: connect() to addr=10.0.0.2, port=4420 keeps failing with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7fea50000b90, and each attempt ends with "qpair failed and we were unable to recover it.")
00:34:21.719 [2024-10-28 15:30:08.423791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.423828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.423968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.423993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.424238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.424304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.424543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.424568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.424748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.424818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.425062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.425128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.425335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.425360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.425496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.425522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.425681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.425748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.425958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.425982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 
00:34:21.719 [2024-10-28 15:30:08.426110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.426136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.426397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.426463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.426698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.426724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.426805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.426845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.426992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.427058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.427269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.427295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.427414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.427439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.427666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.427734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.427952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.427977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.428103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.428129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 
00:34:21.719 [2024-10-28 15:30:08.428387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.428452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.428682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.428714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.428854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.428881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.429030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.429095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.429329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.429353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.429491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.429576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.429843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.719 [2024-10-28 15:30:08.429867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.719 qpair failed and we were unable to recover it. 00:34:21.719 [2024-10-28 15:30:08.430000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.430040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.430199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.430264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.430505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.430570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 
00:34:21.720 [2024-10-28 15:30:08.430810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.430836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.430968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.431046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.431245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.431310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.431499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.431524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.431688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.431715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.431985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.432051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.432257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.432282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.432437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.432477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.432596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.432685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.432945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.432970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 
00:34:21.720 [2024-10-28 15:30:08.433103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.433178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.433388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.433453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.433656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.433697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.433841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.433902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.434181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.434246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.434458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.434483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.434643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.434735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.434966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.435035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.435342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.435374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.435560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.435639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 
00:34:21.720 [2024-10-28 15:30:08.435891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.435958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.436238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.436264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.436474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.436547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.436803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.436843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.436940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.436966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.437151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.437210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.437511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.437585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.437812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.720 [2024-10-28 15:30:08.437842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.720 qpair failed and we were unable to recover it. 00:34:21.720 [2024-10-28 15:30:08.438005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.438058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.438328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.438395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 
00:34:21.721 [2024-10-28 15:30:08.438720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.438747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.438929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.439011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.439310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.439388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.439672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.439714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.439879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.439952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.440237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.440304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.440643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.440692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.440820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.440895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.441200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.441266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.441486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.441511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 
00:34:21.721 [2024-10-28 15:30:08.441749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.441817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.442151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.442218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.442510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.442541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.442743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.442811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.443148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.443215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.443516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.443548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.443703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.443772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.444021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.444087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.444410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.444435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.444630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.444715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 
00:34:21.721 [2024-10-28 15:30:08.444863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.444890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.445041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.445066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.445272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.445344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.445615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.445732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.446057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.446083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.446286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.446362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.446666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.446739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.447047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.447089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.447227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.447576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.447648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 
00:34:21.721 [2024-10-28 15:30:08.447911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.447935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.448098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.448163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.448425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.448491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.448730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.448756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.448876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.448920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.449092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.721 [2024-10-28 15:30:08.449170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.721 qpair failed and we were unable to recover it. 00:34:21.721 [2024-10-28 15:30:08.449443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.449468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.449682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.449753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.450071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.450149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.450388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.450430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 
00:34:21.722 [2024-10-28 15:30:08.450605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.450691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.450964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.451055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.451292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.451335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.451578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.451644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.451935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.452002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.452240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.452279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.452466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.452538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.452858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.453049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.453074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.453197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.453221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 
00:34:21.722 [2024-10-28 15:30:08.453438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.453504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.453765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.453792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.453921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.453961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.454181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.454249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.454498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.454526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.454702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.454772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.455031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.455098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.455421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.455447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.455628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.455714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.455992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.456069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 
00:34:21.722 [2024-10-28 15:30:08.456377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.456401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.456593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.456678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.456908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.456976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.457229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.457255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.457379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.457406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.457682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.457753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.457964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.457990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.458105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.458132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.458383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.458450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 00:34:21.722 [2024-10-28 15:30:08.458694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.722 [2024-10-28 15:30:08.458720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.722 qpair failed and we were unable to recover it. 
00:34:21.722 [2024-10-28 15:30:08.458896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.722 [2024-10-28 15:30:08.458962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:21.722 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts logged between 15:30:08.459 and 15:30:08.515, for tqpair=0x7fea50000b90, tqpair=0x7fea4c000b90, and tqpair=0x1e9d570 ...]
00:34:21.728 [2024-10-28 15:30:08.515690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.728 [2024-10-28 15:30:08.515716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:21.728 qpair failed and we were unable to recover it.
00:34:21.728 [2024-10-28 15:30:08.515834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.728 [2024-10-28 15:30:08.515860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.728 qpair failed and we were unable to recover it. 00:34:21.728 [2024-10-28 15:30:08.516019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.728 [2024-10-28 15:30:08.516084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.728 qpair failed and we were unable to recover it. 00:34:21.728 [2024-10-28 15:30:08.516299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.728 [2024-10-28 15:30:08.516324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.728 qpair failed and we were unable to recover it. 00:34:21.728 [2024-10-28 15:30:08.516457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.728 [2024-10-28 15:30:08.516505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.728 qpair failed and we were unable to recover it. 00:34:21.728 [2024-10-28 15:30:08.516733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.728 [2024-10-28 15:30:08.516760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.728 qpair failed and we were unable to recover it. 00:34:21.728 [2024-10-28 15:30:08.516912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.728 [2024-10-28 15:30:08.516943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.728 qpair failed and we were unable to recover it. 00:34:21.728 [2024-10-28 15:30:08.517153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.517220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.517499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.517565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.517844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.517871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.518075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.518144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 
00:34:21.729 [2024-10-28 15:30:08.518373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.518438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.518663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.518690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.518835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.518884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.519086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.519149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.519358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.519383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.519499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.519528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.519756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.519824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.520039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.520079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.520239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.520293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.520504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.520570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 
00:34:21.729 [2024-10-28 15:30:08.520882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.520910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.521122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.521187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.521399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.521467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.521760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.521787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.521955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.522052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.522253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.522320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.522532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.522558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.522706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.522777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.522981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.523045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.523225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.523266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 
00:34:21.729 [2024-10-28 15:30:08.523417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.523443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.523593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.523669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.523846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.523885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.524037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.524101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.524295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.524357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.524561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.729 [2024-10-28 15:30:08.524586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.729 qpair failed and we were unable to recover it. 00:34:21.729 [2024-10-28 15:30:08.524723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.524773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.525029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.525096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.525355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.525381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.525554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.525620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 
00:34:21.730 [2024-10-28 15:30:08.525898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.525969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.526278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.526305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.526510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.526579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.526818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.526859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.527019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.527043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.527169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.527193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.527383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.527447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.527695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.527722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.527868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.527933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.528158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.528222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 
00:34:21.730 [2024-10-28 15:30:08.528426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.528450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.528594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.528622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.528902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.528976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.529219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.529261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.529457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.529534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.529755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.529810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.529939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.529980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.530096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.530124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.530316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.530342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.530439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.530465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 
00:34:21.730 [2024-10-28 15:30:08.530573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.530611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.530736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.530783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.530921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.530959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.531103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.531168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.531440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.531504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.531768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.531795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.531927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.531956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.532082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.532111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.532308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.532334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.532487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.532564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 
00:34:21.730 [2024-10-28 15:30:08.532853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.532880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.533028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.533063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.533268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.533332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.533603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.533692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.533899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.533926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:21.730 [2024-10-28 15:30:08.534047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.730 [2024-10-28 15:30:08.534073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:21.730 qpair failed and we were unable to recover it. 00:34:22.005 [2024-10-28 15:30:08.534171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-10-28 15:30:08.534197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.534324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.534350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.534514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.534558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.534831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.534871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-10-28 15:30:08.535026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.535066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.535166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.535195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.535319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.535383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.535578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.535630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.535812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.535840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.535994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.536059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.536306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.536369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.536639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.536670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.536830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.536856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.536991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.537026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-10-28 15:30:08.537144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.537170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.537396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.537460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.537717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.537744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.537840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.537866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.538039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.538113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.538381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.538458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.538761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.538789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.538916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.538990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.539215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.539281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.539603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.539686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-10-28 15:30:08.539839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.539866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.540007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.540072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.540253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.540314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.540541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.540605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.540778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.540804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.540911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.540937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.541137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.541206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.541465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.541533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.541814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.541841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.542048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.542145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-10-28 15:30:08.542383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.542449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.542687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.542715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.542845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.542871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.543032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.543096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.543347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.543411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.543597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.543686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-10-28 15:30:08.543863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-10-28 15:30:08.543890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.544914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.544946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.545222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.545290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.545536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.545601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 
00:34:22.007 [2024-10-28 15:30:08.545791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.545818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.546020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.546086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.546290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.546355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.546633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.546726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.546837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.546864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.547468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.547545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.547767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.547795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.547957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.548007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.548272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.548336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.548613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.548698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 
00:34:22.007 [2024-10-28 15:30:08.548887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.548913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.549084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.549137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.549337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.549402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.549635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.549726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.549826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.549852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.549953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.549979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.550134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.550196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.550456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.550520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.550768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.550794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.550938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.550973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 
00:34:22.007 [2024-10-28 15:30:08.551164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.551228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.551496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.551561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.551801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.551827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.551932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.551982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.552182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.552207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.552363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.552442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.552729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.552755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.552884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.552908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.553085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.553147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.553372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.553446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 
00:34:22.007 [2024-10-28 15:30:08.553704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.553729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.553872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.553916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.554167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.554230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.554509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.554532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.554709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.554744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.554860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.007 [2024-10-28 15:30:08.554895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.007 qpair failed and we were unable to recover it. 00:34:22.007 [2024-10-28 15:30:08.555110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.555149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.555307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.555370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.555628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.555717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.555930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.555968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 
00:34:22.008 [2024-10-28 15:30:08.556141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.556206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.556393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.556458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.556622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.556668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.556815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.556841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.557013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.557079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.557267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.557291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.557416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.557442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.557589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.557666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.557861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.557888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.558069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.558149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 
00:34:22.008 [2024-10-28 15:30:08.558380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.558444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.558733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.558761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.558867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.558893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.559021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.559086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.559287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.559326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.559458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.559505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.559727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.559794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.560037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.560062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.560176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.560200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.560356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.560421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 
00:34:22.008 [2024-10-28 15:30:08.560596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.560687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.560826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.560852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.560975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.561038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.561280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.561304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.561421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.561485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.561694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.561761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.561986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.562026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.562130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.562155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.562385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.562449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.562614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.562642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 
00:34:22.008 [2024-10-28 15:30:08.562762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.562789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.562904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.562968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.563182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.563207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.563309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.563335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.563465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.563530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.563732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.563760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.563858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.563884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.564076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.564102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.564297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.564322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.564521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.564584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 
00:34:22.008 [2024-10-28 15:30:08.564776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.564803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.564896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.564922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.565065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.565091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.565248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.565312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.565550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.008 [2024-10-28 15:30:08.565574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.008 qpair failed and we were unable to recover it. 00:34:22.008 [2024-10-28 15:30:08.565715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.565742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.565838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.565864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.565956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.565982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.566101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.566142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.566299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.566364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 
00:34:22.009 [2024-10-28 15:30:08.566539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.566579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.566725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.566752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.566845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.566872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.567004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.567029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.567146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.567201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.567394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.567459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.567709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.567735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.567837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.567863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.567983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.568059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.568298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.568322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 
00:34:22.009 [2024-10-28 15:30:08.568462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.568516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.568707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.568774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.568979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.569004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.569170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.569235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.569450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.569515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.569734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.569761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.569867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.569893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.570118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.570183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.570436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.570475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.570693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.570770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 
00:34:22.009 [2024-10-28 15:30:08.570973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.571039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.571278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.571303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.571469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.571510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.571646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.571730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.571892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.571918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.572041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.572066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.572250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.572315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.572534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.572599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.572775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.572801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.572930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.572956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 
00:34:22.009 [2024-10-28 15:30:08.573179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.573205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.573388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.573453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.573719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.573787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.574008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.574053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.009 [2024-10-28 15:30:08.574201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.009 [2024-10-28 15:30:08.574226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.009 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.574450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.574521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.574713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.574740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.574843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.574870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.575032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.575097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.575294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.575319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 
00:34:22.010 [2024-10-28 15:30:08.575460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.575487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.575637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.575714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.575880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.575906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.576009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.576049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.576188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.576253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.576472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.576504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.576720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.576786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.576993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.577058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.577334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.577359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.577533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.577599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 
00:34:22.010 [2024-10-28 15:30:08.577822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.577888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.578128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.578154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.578312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.578378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.578584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.578667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.578827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.578853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.579026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.579052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.579238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.579304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.579533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.579573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.579677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.579704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.579821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.579852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 
00:34:22.010 [2024-10-28 15:30:08.579992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.580017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.580201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.580269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.580455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.580518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.580736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.580807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.580932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.580975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.581141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.581167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.581347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.581412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.581648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.581680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.010 [2024-10-28 15:30:08.581807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.010 [2024-10-28 15:30:08.581833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.010 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.582008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.582074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 
00:34:22.011 [2024-10-28 15:30:08.582365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.582391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.582539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.582603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.582816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.582882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.583163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.583188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.583381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.583446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.583722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.583790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.584036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.584062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.584278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.584343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.584683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.584750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.584975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.585001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 
00:34:22.011 [2024-10-28 15:30:08.585102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.585128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.585331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.585396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.585597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.585624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.585749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.585775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.585993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.586057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.586293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.586319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.586491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.586556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.586775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.586841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.587095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.587120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.587286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.587350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 
00:34:22.011 [2024-10-28 15:30:08.587558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.587623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.587805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.587832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.587988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.588053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.588361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.588426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.588664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.588717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.588817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.588886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.589114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.589179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.589379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.589404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.589600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.589681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-10-28 15:30:08.589887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-10-28 15:30:08.589964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 
00:34:22.016 [2024-10-28 15:30:08.640676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.016 [2024-10-28 15:30:08.640752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.016 qpair failed and we were unable to recover it. 00:34:22.016 [2024-10-28 15:30:08.640955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.016 [2024-10-28 15:30:08.640993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.016 qpair failed and we were unable to recover it. 00:34:22.016 [2024-10-28 15:30:08.641118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.016 [2024-10-28 15:30:08.641142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.016 qpair failed and we were unable to recover it. 00:34:22.016 [2024-10-28 15:30:08.641406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.641471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.641736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.641761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.641895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.641960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.642247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.642312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.642643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.642675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.642833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.642897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.643132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.643197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 
00:34:22.017 [2024-10-28 15:30:08.643509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.643533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.643725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.643791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.644030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.644095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.644319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.644343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.644507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.644545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.644717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.644783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.645031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.645055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.645221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.645285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.645505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.645570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.645772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.645796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 
00:34:22.017 [2024-10-28 15:30:08.645915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.645938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.646104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.646166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.646357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.646381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.646553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.646618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.646813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.646880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.647126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.647157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.647358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.647428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.647721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.647788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.648029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.648054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.648265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.648329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 
00:34:22.017 [2024-10-28 15:30:08.648667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.648735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.648956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.648980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.649150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.649173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.649357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.649422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.649623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.649656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.649780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.649807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.649997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.650062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.650311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.650350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.650577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.650671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.650873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.650950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 
00:34:22.017 [2024-10-28 15:30:08.651223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.651247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.651481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.651545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.651783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.651850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.017 qpair failed and we were unable to recover it. 00:34:22.017 [2024-10-28 15:30:08.652157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.017 [2024-10-28 15:30:08.652181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.652316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.652368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.652611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.652694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.652911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.652936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.653092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.653168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.653428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.653493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.653746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.653786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 
00:34:22.018 [2024-10-28 15:30:08.653888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.653914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.654125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.654190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.654459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.654483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.654679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.654724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.654833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.654857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.654952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.654990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.655150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.655226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.655390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.655415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.655544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.655580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.655718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.655746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 
00:34:22.018 [2024-10-28 15:30:08.655850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.655875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.656007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.656032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.656183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.656207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.656367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.656391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.656539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.656566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.656724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.656750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.656854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.656881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.657014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.657039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.657217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.657241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.657434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.657460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 
00:34:22.018 [2024-10-28 15:30:08.657598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.657633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.657769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.657794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.657928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.657967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.658119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.658159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.658342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.658367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.658530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.658556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.658719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.658745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.658855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.658880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.659011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-10-28 15:30:08.659035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.018 qpair failed and we were unable to recover it. 00:34:22.018 [2024-10-28 15:30:08.659212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.659240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-10-28 15:30:08.659358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.659383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.659497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.659522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.659637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.659670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.659774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.659800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.659957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.659981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.660187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.660212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.660331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.660356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.660462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.660488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.660610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.660636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.660801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.660827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-10-28 15:30:08.661020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.661060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.661188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.661211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.661342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.661365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.661540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.661565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.661749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.661776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.661864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.661891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.662017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.662057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.662195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.662233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.662358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.662398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.662535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.662562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-10-28 15:30:08.662694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.662721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.662836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.662861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.663015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.663053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.663187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.663210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.663376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.663401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.663537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.663563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.663708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.663739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.663869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.663909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.664064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.664089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.664184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.664222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-10-28 15:30:08.664345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.664370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.664504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.664560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.664689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.664718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.664828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.664856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.665004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.665046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.665195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.665261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.665386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.665429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.665647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.665715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.665816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.665842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-10-28 15:30:08.665936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-10-28 15:30:08.665976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-10-28 15:30:08.666143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.666185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.666313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.666338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.666517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.666542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.666672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.666714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.666824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.666852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.666999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.667025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.667163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.667203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.667318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.667344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.667510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.667536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.667686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.667712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-10-28 15:30:08.667818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.667843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.667963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.667988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.668069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.668110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.668229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.668254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.668380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.668407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.668531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.668558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.668682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.668709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.668865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.668891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.669014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.669054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.669163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.669187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-10-28 15:30:08.669351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.669391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.669536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.669561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.669714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.669741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.669860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.669887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.669984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.670023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.670183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.670209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.670344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.670373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.670495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.670520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.670684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.670711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.670825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.670852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-10-28 15:30:08.671003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.671044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.671207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.671231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.671394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.671420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.671539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.671579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.671687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.671727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.671823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.671847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.671942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.671968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.672078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.672102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.672237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.672263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-10-28 15:30:08.672382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-10-28 15:30:08.672410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-10-28 15:30:08.672561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.672601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.672730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.672757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.672877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.672904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.673023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.673048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.673198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.673223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.673334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.673360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.673543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.673568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.673686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.673712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.673858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.673884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.673989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.674013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-10-28 15:30:08.674153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.674178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.674306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.674333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.674442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.674467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.674588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.674615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.674733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.674761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.674885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.674911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.675048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.675074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.675204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.675244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.675354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.675378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.675515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.675541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-10-28 15:30:08.675635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.675690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.675795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.675822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.675942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.675983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.676147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.676171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.676339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.676365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.676470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.676510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.676646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.676685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.676830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.676856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.677013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.677037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.677177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.677201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-10-28 15:30:08.677322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.677347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.677479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.677505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.677641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.677677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.677804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.677831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.677940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.677965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.678069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.678095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.678225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.678264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.678374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.678412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.678531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.678570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-10-28 15:30:08.678673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-10-28 15:30:08.678702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.022 [2024-10-28 15:30:08.678845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.678873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.678962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.678989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.679114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.679140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.679320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.679360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.679502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.679528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.679684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.679710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.679847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.679873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.679990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.680016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.680130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.680155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.680279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.680304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 
00:34:22.022 [2024-10-28 15:30:08.680476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.680517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.680614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.680640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.680796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.680822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.680970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.680996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.681104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.681145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.681253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.681280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.681417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.681443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.681583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.681608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.681768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.681794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.681913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.681939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 
00:34:22.022 [2024-10-28 15:30:08.682079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.682103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.682229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.682253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.682366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.682391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.682546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.682588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.682714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.682741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.682861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.682888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.683030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.683071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.683214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.683239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.683376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.683402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.683545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.683570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 
00:34:22.022 [2024-10-28 15:30:08.683689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.683715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.683884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.683911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.684059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.684083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.684253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.684278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.684416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.684442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.684567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.684607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.684758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.684785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.684907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.684950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.685074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.685114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 00:34:22.022 [2024-10-28 15:30:08.685230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.022 [2024-10-28 15:30:08.685295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.022 qpair failed and we were unable to recover it. 
00:34:22.022 [2024-10-28 15:30:08.685424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.685450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.685599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.685624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.685778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.685804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.685903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.685947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.686088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.686127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.686270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.686295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.686462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.686501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.686690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.686717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.686819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.686845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.686981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.687005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 
00:34:22.023 [2024-10-28 15:30:08.687132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.687171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.687343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.687367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.687501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.687542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.687646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.687682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.687814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.687854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.687974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.688013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.688133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.688173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.688331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.688355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.688486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.688512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.688655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.688683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 
00:34:22.023 [2024-10-28 15:30:08.688838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.688865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.689030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.689054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.689203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.689228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.689392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.689433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.689530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.689555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.689700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.689726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.689840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.689869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.689996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.690036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.690151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.690189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.690321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.690345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 
00:34:22.023 [2024-10-28 15:30:08.690454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.690480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.690587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.690614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.690798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.690838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.690972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.690998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.691134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.691175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.023 [2024-10-28 15:30:08.691293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.023 [2024-10-28 15:30:08.691318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.023 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.691415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.691441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.691563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.691589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.691744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.691771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.691932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.691959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 
00:34:22.024 [2024-10-28 15:30:08.692107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.692131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.692238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.692263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.692435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.692475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.692563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.692602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.692757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.692784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.692918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.692945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.693069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.693111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.693232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.693257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.693400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.693426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.693555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.693581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 
00:34:22.024 [2024-10-28 15:30:08.693716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.693744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.693869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.693908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.694021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.694059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.694213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.694237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.694377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.694403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.694540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.694567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.694669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.694696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.694838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.694864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.695013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.695053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.695201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.695226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 
00:34:22.024 [2024-10-28 15:30:08.695306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.695330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.695431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.695456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.695549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.695575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.695705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.695747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.695877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.695918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.696041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.696066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.696213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.696257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.696431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.696455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.696558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.696631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-10-28 15:30:08.696768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-10-28 15:30:08.696809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 2024-10-28 15:30:08.696933 and 15:30:08.731540, cycling over tqpair handles 0x7fea50000b90, 0x7fea4c000b90 and 0x1e9d570 ...]
00:34:22.029 [2024-10-28 15:30:08.731663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.029 [2024-10-28 15:30:08.731705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420
00:34:22.029 qpair failed and we were unable to recover it.
00:34:22.029 [2024-10-28 15:30:08.731842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.029 [2024-10-28 15:30:08.731869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.029 qpair failed and we were unable to recover it. 00:34:22.029 [2024-10-28 15:30:08.731993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.029 [2024-10-28 15:30:08.732033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.029 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.732159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.732199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.732332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.732358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.732507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.732533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.732633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.732667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.732809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.732835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.732958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.732999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.733149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.733175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.733310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.733336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 
00:34:22.030 [2024-10-28 15:30:08.733456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.733482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.733584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.733610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.733736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.733775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.733865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.733893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.734039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.734066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.734212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.734237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.734359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.734386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.734539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.734564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.734679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.734707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.734825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.734851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 
00:34:22.030 [2024-10-28 15:30:08.734995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.735035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.735169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.735194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.735356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.735383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.735506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.735538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.735655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.735696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.735818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.735843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.735983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.736022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.736170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.736194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.736316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.736359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.736491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.736518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 
00:34:22.030 [2024-10-28 15:30:08.736612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.736639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.736770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.736797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.736967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.736993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.737131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.737157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.737276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.737301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.737422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.737463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.737576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.737603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.737734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.737763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.737856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.737882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.738041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.738065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 
00:34:22.030 [2024-10-28 15:30:08.738217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-10-28 15:30:08.738241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-10-28 15:30:08.738362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.738386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.738525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.738550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.738687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.738714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.738814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.738840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.738993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.739019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.739151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.739175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.739337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.739361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.739485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.739510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.739635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.739667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 
00:34:22.031 [2024-10-28 15:30:08.739793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.739818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.739980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.740006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.740105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.740143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.740260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.740285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.740429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.740453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.740552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.740578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.740710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.740736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.740851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.740875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.741007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.741049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.741134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.741158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 
00:34:22.031 [2024-10-28 15:30:08.741270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.741296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.741412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.741452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.741578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.741602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.741729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.741768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.741880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.741908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.742056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.742081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.742227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.742253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.742384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.742412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.742504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.742531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.742707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.742759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 
00:34:22.031 [2024-10-28 15:30:08.742902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.742928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.743057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.743097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.743195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.743218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.743350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.743374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.743496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.743520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.743636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.743674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.743807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-10-28 15:30:08.743831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-10-28 15:30:08.743956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.743995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.744093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.744117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.744242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.744268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 
00:34:22.032 [2024-10-28 15:30:08.744419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.744444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.744561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.744586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.744709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.744735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.744855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.744880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.745030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.745054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.745204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.745229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.745345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.745370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.745456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.745481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.745580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.745605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.745758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.745795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 
00:34:22.032 [2024-10-28 15:30:08.745952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.746000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.746097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.746121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.746278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.746303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.746395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.746419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.746585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.746676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.746828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.746853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.746951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.746975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.747095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.747120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.747273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.747317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.747530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.747598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 
00:34:22.032 [2024-10-28 15:30:08.747774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.747799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.747952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.747976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.748190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.748255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.748477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.748541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.748817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.748845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.748994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.749059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.749283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.749307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.749397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.749421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.749615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.749737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.749850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.749877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 
00:34:22.032 [2024-10-28 15:30:08.750003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.750028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.750249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.750313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.750549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.750614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-10-28 15:30:08.750785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-10-28 15:30:08.750810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.750903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.750928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.751128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.751153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.751272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.751296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.751499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.751578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.751800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.751827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.751974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.751998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-10-28 15:30:08.752207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.752273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.752495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.752560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.752776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.752802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.752899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.752924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.753138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.753162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.753316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.753372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.753576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.753641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.753843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.753870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.753971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.753996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.754196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.754260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-10-28 15:30:08.754430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.754500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.754725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.754751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.754830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.754856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.754997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.755023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.755214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.755279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.755477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.755541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.755763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.755790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.755915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.755941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.756155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.756218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.756517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.756581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-10-28 15:30:08.756773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.756799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.756970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.757046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.757356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.757429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.757705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.757735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.757838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.757868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.758069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.758093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.758241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.758313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.758562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.758625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.758793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.758819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.758991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.759047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 
00:34:22.033 [2024-10-28 15:30:08.759327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.759390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.759597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.759677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.759830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.759856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.033 [2024-10-28 15:30:08.759984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.033 [2024-10-28 15:30:08.760048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.033 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.760260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.760283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.760399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.760423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.760608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.760689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.760797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.760822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.760991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.761031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.761239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.761303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-10-28 15:30:08.761603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.761702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.761833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.761859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.762045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.762109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.762344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.762407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.762645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.762722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.762822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.762847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.763005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.763044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.763124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.763184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.763384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.763448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.763703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.763730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-10-28 15:30:08.763856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.763882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.764073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.764137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.764417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.764481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.764739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.764764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.764869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.764894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.765028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.765052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.765182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.765253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.765449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.765513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.765717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.765743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.765850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.765875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-10-28 15:30:08.766018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.766081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.766325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.766388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.766679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.766741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.766856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.766892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.767005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.767030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.767224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.767289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.767533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.767597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.767812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.767837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.767960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.767999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.768146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.768210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 
00:34:22.034 [2024-10-28 15:30:08.768469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.768533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.768766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.768791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.768877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.768902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.034 [2024-10-28 15:30:08.769027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.034 [2024-10-28 15:30:08.769051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.034 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.769212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.769284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.769483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.769548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.769735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.769761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.769861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.769887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.769996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.770059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.770356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.770420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 
00:34:22.035 [2024-10-28 15:30:08.770648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.770728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.770845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.770870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.770993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.771017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.771143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.771192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.771394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.771457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.771665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.771690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.771853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.771921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.772142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.772205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.772454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.772478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.772597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.772693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 
00:34:22.035 [2024-10-28 15:30:08.772874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.772937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.773161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.773185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.773302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.773350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.773617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.773711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.773954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.773978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.774104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.774173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.774371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.774434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.774723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.774749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.774900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.774965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.775210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.775273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 
00:34:22.035 [2024-10-28 15:30:08.775532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.775556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.775689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.775714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.775838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.775902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.776159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.776183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.776323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.776388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.776609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.776714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.776908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.776933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.777072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.777097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.777288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.777353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.777610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.777694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 
00:34:22.035 [2024-10-28 15:30:08.777806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.777832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.777990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.778054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.778279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.778303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.778423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.778472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.778746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.778813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.779047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.779070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.035 qpair failed and we were unable to recover it. 00:34:22.035 [2024-10-28 15:30:08.779217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.035 [2024-10-28 15:30:08.779255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.779417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.779482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.779741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.779765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.779849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.779877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 
00:34:22.036 [2024-10-28 15:30:08.780095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.780159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.780406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.780430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.780601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.780678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.780800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.780824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.780950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.780974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.781080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.781104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.781311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.781374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.781549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.781578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.781681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.781707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.781884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.781947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 
00:34:22.036 [2024-10-28 15:30:08.782191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.782214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.782335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.782407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.782691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.782758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.783036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.783059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.783201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.783265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.783525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.783590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.783818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.783842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.783965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.783989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.784217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.784281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.784473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.784496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 
00:34:22.036 [2024-10-28 15:30:08.784617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.784641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.784823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.784887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.785102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.785125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.785273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.785318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.785536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.785600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.785791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.785816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.785950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.785974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.786170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.786235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.786474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.786537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.786777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.786802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 
00:34:22.036 [2024-10-28 15:30:08.786918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.786968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.787175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.787198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.787314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.787338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.787530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.787594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.787801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.036 [2024-10-28 15:30:08.787825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.036 qpair failed and we were unable to recover it. 00:34:22.036 [2024-10-28 15:30:08.787957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.787981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.788137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.788200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.788432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.788455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.788578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.788601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.788833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.788898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 
00:34:22.037 [2024-10-28 15:30:08.789160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.789183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.789305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.789342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.789507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.789570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.789761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.789787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.789910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.789935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.790087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.790151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.790391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.790415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.790538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.790591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.790856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.790881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.791016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.791041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 
00:34:22.037 [2024-10-28 15:30:08.791175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.791252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.791491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.791555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.791800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.791825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.791943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.791967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.792171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.792235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.792489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.792512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.792678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.792743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.792980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.793044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.793296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.793320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 00:34:22.037 [2024-10-28 15:30:08.793462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.037 [2024-10-28 15:30:08.793484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.037 qpair failed and we were unable to recover it. 
00:34:22.037 [2024-10-28 15:30:08.793679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.037 [2024-10-28 15:30:08.793757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:22.037 qpair failed and we were unable to recover it.
00:34:22.037 [2024-10-28 15:30:08.793970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.037 [2024-10-28 15:30:08.793994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:22.037 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every further reconnect attempt against tqpair=0x1e9d570 (addr=10.0.0.2, port=4420, errno = 111) from 15:30:08.794143 through 15:30:08.815791 ...]
00:34:22.039 [2024-10-28 15:30:08.816071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.039 [2024-10-28 15:30:08.816170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:22.039 qpair failed and we were unable to recover it.
[... the same sequence continues through 15:30:08.841582, with the sock connection errors reported for both tqpair=0x1e9d570 and tqpair=0x7fea50000b90, always with addr=10.0.0.2, port=4420 and errno = 111, and every attempt ending in "qpair failed and we were unable to recover it." ...]
00:34:22.042 [2024-10-28 15:30:08.841869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.841900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.842057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.842081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.842258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.842331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.842645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.842728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.843012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.843036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.843179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.843245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.843507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.843572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.843789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.843814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.843969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.844032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.844321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.844385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 
00:34:22.042 [2024-10-28 15:30:08.844710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.844735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.844959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.845025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.845298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.845362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.845663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.845689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.845884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.845950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.846210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.846275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.846579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.846604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.846842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.846909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.847222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.847290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.847636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.847683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 
00:34:22.042 [2024-10-28 15:30:08.847891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.847959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.848277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.848345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.848656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.848697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.848896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.848963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.849252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.849326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.849674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.849726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.849949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.849975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.850151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.850217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.850470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.850539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.850808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.850850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 
00:34:22.042 [2024-10-28 15:30:08.851003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.851029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.851216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.851241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.851403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.851430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.851660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.851687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.851876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.851901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.852097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.852122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.852312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.852337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.852519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.852545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.852772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.042 [2024-10-28 15:30:08.852800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.042 qpair failed and we were unable to recover it. 00:34:22.042 [2024-10-28 15:30:08.852998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.043 [2024-10-28 15:30:08.853031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.043 qpair failed and we were unable to recover it. 
00:34:22.043 [2024-10-28 15:30:08.853188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.043 [2024-10-28 15:30:08.853214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.043 qpair failed and we were unable to recover it. 00:34:22.043 [2024-10-28 15:30:08.853414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.043 [2024-10-28 15:30:08.853480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.043 qpair failed and we were unable to recover it. 00:34:22.043 [2024-10-28 15:30:08.853723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.043 [2024-10-28 15:30:08.853789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.043 qpair failed and we were unable to recover it. 00:34:22.043 [2024-10-28 15:30:08.854079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.043 [2024-10-28 15:30:08.854120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.043 qpair failed and we were unable to recover it. 00:34:22.043 [2024-10-28 15:30:08.854335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.043 [2024-10-28 15:30:08.854401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.043 qpair failed and we were unable to recover it. 00:34:22.043 [2024-10-28 15:30:08.854709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.043 [2024-10-28 15:30:08.854775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.043 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.855002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.855028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.855245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.855271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.855489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.855553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.855863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.855889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 
00:34:22.329 [2024-10-28 15:30:08.856072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.856138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.856459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.856523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.856829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.856856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.857065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.857131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.857398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.857435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.857589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.857616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.857769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.857820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.858009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.858045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.858237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.858264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.858410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.858447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 
00:34:22.329 [2024-10-28 15:30:08.858706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.858775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.859009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.859035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.859279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.859355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.859641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.859735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.860035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.860060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.860267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.860343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.860691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.860760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.861067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.861092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.861332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.861400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.861721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.861789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 
00:34:22.329 [2024-10-28 15:30:08.862085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.862109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.862321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.862386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.862707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.862774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.863028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.863053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.863209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.863274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.863590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.863668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.863978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.864003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.864248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.864313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.864602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.864682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.864995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.865023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 
00:34:22.329 [2024-10-28 15:30:08.865288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.865353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-10-28 15:30:08.865607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-10-28 15:30:08.865693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.865999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.866023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.866234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.866300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.866680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.866747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.867064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.867089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.867242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.867307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.867603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.867685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.867974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.867999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.868139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.868205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 
00:34:22.330 [2024-10-28 15:30:08.868504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.868568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.868883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.868909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.869133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.869199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.869534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.869599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.869880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.869905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.870064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.870130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.870438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.870839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.870865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.871059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.871124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.871364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.871428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 
00:34:22.330 [2024-10-28 15:30:08.871722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.871747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.871902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.871979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.872275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.872340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.872610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.872688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.872973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.873038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.873340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.873404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.873730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.873756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.874007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.874073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.874348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.874411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.874721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.874746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 
00:34:22.330 [2024-10-28 15:30:08.874881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.874911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.875087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.875164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.875475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.875500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.875667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.875743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.876049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.876120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.876436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.876460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.876693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.876762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.877066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.877140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.877436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.877460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.877716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.877797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 
00:34:22.330 [2024-10-28 15:30:08.878073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.878148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.330 [2024-10-28 15:30:08.878469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.330 [2024-10-28 15:30:08.878494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.330 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.878688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.878758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.879055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.879130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.879410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.879434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.879699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.879769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.880082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.880156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.880462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.880486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.880636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.880718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.880902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.880927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 
00:34:22.331 [2024-10-28 15:30:08.881122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.881146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.881331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.881355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.881709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.881777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.882094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.882120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.882312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.882382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.882689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.882758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.883021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.883045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.883290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.883366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.883625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.883721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.884047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.884071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 
00:34:22.331 [2024-10-28 15:30:08.884267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.884346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.884617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.884714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.885026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.885050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.885349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.885417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.885748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.885820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.886124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.886148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.886291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.886357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.886607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.886699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.886964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.886989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.887130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.887195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 
00:34:22.331 [2024-10-28 15:30:08.887480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.887546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.887867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.887893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.888041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.888107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.888426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.888490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.888801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.888826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.889021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.889087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.889398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.889462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.331 [2024-10-28 15:30:08.889765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.331 [2024-10-28 15:30:08.889790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.331 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.889959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.890025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.890336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.890411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 
00:34:22.332 [2024-10-28 15:30:08.890734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.890759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.890957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.890981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.891254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.891319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.891616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.891640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.891829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.891896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.892150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.892214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.892527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.892551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.892690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.892779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.893043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.893108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.893392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.893416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 
00:34:22.332 [2024-10-28 15:30:08.893575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.893641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.893959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.894025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.894230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.894254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.894406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.894475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.894729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.894755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.894949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.894973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.895141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.895207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.895410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.895474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.895747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.895773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.896028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.896094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 
00:34:22.332 [2024-10-28 15:30:08.896331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.896395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.896695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.896722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.896897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.896963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.897207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.897271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.897512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.897537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.897747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.897814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.898072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.898144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.898388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.898419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.898670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.898738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.899053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.899118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 
00:34:22.332 [2024-10-28 15:30:08.899388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.899418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.899568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.899635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.899883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.899950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.900159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.900201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.900381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.900448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.900667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.900733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.332 [2024-10-28 15:30:08.901007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.332 [2024-10-28 15:30:08.901033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.332 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.901191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.901257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.901550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.901615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.901912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.901943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 
00:34:22.333 [2024-10-28 15:30:08.902096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.902162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.902453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.902517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.902793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.902820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.902984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.903050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.903254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.903318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.903516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.903582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.903814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.903841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.903959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.903985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.904152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.904179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.904344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.904371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 
00:34:22.333 [2024-10-28 15:30:08.904509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.904573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.904792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.904819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.904936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.904963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.905127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.905192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.905370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.905412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.905584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.905676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.905905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.905971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.906224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.906250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.906371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.906449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.906683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.906749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 
00:34:22.333 [2024-10-28 15:30:08.906953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.906979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.907142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.907196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.907473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.907537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.907852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.907879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.908142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.908206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.908455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.908520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.908779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.908807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.908933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.908959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.909210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.909275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-10-28 15:30:08.909564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-10-28 15:30:08.909588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 
00:34:22.333 [2024-10-28 15:30:08.909759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.909785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.910121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.910185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.910459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.910484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.910691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.910741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.910880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.910906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.911027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.911053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.911180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.911206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.911501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.911599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.911817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.911845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.911962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.911995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 
00:34:22.334 [2024-10-28 15:30:08.912122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.912184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.912418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.912445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.912569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.912622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.912837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.912865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.912995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.913021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.913230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.913295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.913551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.913615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.913865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.913891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.914031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.914095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.914341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.914406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 
00:34:22.334 [2024-10-28 15:30:08.914626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.914660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.914766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.914792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.914983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.915047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.915272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.915298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.915439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.915464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.915622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.915708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.915923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.915965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.916135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.916200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.916450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.916515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.916776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.916803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 
00:34:22.334 [2024-10-28 15:30:08.916900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.916953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.917168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.917233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.917489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.917515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.917617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.917713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.917881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.917907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.918056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.918081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.918238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.918303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.918554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.918619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.918867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.918894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-10-28 15:30:08.918991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-10-28 15:30:08.919055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 
00:34:22.334 [2024-10-28 15:30:08.919291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.919355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.919567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.919593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.919730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.919756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.919876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.919902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.920037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.920082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.920198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.920222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.920417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.920482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.920687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.920714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.920822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.920848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.921000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.921075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 
00:34:22.335 [2024-10-28 15:30:08.921319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.921345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.921530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.921594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.921773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.921799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.921917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.921942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.922074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.922098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.922352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.922416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.922597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.922624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.922738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.922764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.922906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.922955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.923189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.923214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 
00:34:22.335 [2024-10-28 15:30:08.923344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.923414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.923594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.923672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.923853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.923878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.924129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.924196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.924438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.924502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.924732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.924760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.924868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.924894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.925126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.925191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.925405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.925431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-10-28 15:30:08.925535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-10-28 15:30:08.925573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 
00:34:22.335 [2024-10-28 15:30:08.925804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.335 [2024-10-28 15:30:08.925830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:22.335 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111; sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt between 15:30:08.925 and 15:30:08.984 ...]
00:34:22.341 [2024-10-28 15:30:08.984534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.341 [2024-10-28 15:30:08.984570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:22.341 qpair failed and we were unable to recover it.
00:34:22.341 [2024-10-28 15:30:08.984716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.984752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.984878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.984904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.985089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.985139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.985312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.985389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.985518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.985574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.985777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.985803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.985984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.986048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.986347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.986370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.986496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.986561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.986918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.986960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 
00:34:22.341 [2024-10-28 15:30:08.987120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.987144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.987269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.987294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.987523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.987589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.987862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.987891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.988037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.988102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.988603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.988688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.988898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.988924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.989170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.989235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.989496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.989562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.989829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.989856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 
00:34:22.341 [2024-10-28 15:30:08.990006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.990070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.990328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.990393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.990607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.990645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.990792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.990818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.990945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.990972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.991219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.991245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.991411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.991436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.991588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.991671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.991905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.991931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.992147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.992213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 
00:34:22.341 [2024-10-28 15:30:08.992433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.992459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.992564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.992590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.341 [2024-10-28 15:30:08.992789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.341 [2024-10-28 15:30:08.992816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.341 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.992977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.993003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.993157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.993223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.993512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.993577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.993815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.993848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.993966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.994013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.994250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.994314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.994508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.994532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 
00:34:22.342 [2024-10-28 15:30:08.994661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.994688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.994848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.994883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.995092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.995117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.995315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.995380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.995583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.995648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.995882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.995909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.996065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.996131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.996384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.996449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.996719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.996747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.996917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.996954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 
00:34:22.342 [2024-10-28 15:30:08.997188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.997254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.997482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.997507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.997655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.997681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.997826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.997861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.997975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.998001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.998158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.998184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.998392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.998457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.998635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.998683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.998821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.998848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.999001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.999067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 
00:34:22.342 [2024-10-28 15:30:08.999286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.999310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.999513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.999585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.999787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.999814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:08.999923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:08.999965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:09.000106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:09.000176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:09.000390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:09.000460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:09.000666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:09.000704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.342 [2024-10-28 15:30:09.000811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.342 [2024-10-28 15:30:09.000837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.342 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.001030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.001096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.001321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.001345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 
00:34:22.343 [2024-10-28 15:30:09.001468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.001492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.001714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.001751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.001876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.001902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.002107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.002158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.002384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.002450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.002718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.002745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.002868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.002910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.003089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.003154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.003420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.003460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.003586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.003667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 
00:34:22.343 [2024-10-28 15:30:09.003844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.003879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.004047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.004071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.004197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.004241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.004496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.004561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.004757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.004785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.004888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.004914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.005070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.005136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.005381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.005414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.005530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.005555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.005745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.005771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 
00:34:22.343 [2024-10-28 15:30:09.005864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.005891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.006002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.006041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.006194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.006258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.006453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.006478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.006601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.006640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.006770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.006805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.006930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.006956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.007149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.007174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.007389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.007454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.007633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.007687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 
00:34:22.343 [2024-10-28 15:30:09.007808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.007834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.008021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.008086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.008336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.008361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.008543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.008620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.008831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.008866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.009059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.009082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.009251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.009318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.009492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.343 [2024-10-28 15:30:09.009567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.343 qpair failed and we were unable to recover it. 00:34:22.343 [2024-10-28 15:30:09.009801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.009828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.009998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.010023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 
00:34:22.344 [2024-10-28 15:30:09.010234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.010296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.010551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.010590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.010736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.010778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.010925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.011005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.011218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.011243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.011380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.011404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.011600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.011700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.011807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.011832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.011970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.011995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.012134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.012197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 
00:34:22.344 [2024-10-28 15:30:09.012413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.012437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.012611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.012636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.012826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.012862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.012997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.013037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.013206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.013232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.013433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.013495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.013694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.013720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.013861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.013887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.014076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.014141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 00:34:22.344 [2024-10-28 15:30:09.015314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.344 [2024-10-28 15:30:09.015393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.344 qpair failed and we were unable to recover it. 
00:34:22.344 [2024-10-28 15:30:09.015676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.344 [2024-10-28 15:30:09.015742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:22.344 qpair failed and we were unable to recover it.
00:34:22.344-00:34:22.345 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats from 15:30:09.015874 through 15:30:09.022551 ...]
00:34:22.345 [2024-10-28 15:30:09.022710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.345 [2024-10-28 15:30:09.022748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420
00:34:22.345 qpair failed and we were unable to recover it.
00:34:22.345-00:34:22.350 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats from 15:30:09.022903 through 15:30:09.054881 ...]
00:34:22.350 [2024-10-28 15:30:09.054999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.055024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.055187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.055212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.055358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.055383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.055541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.055569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.055678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.055716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.055816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.055843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.055986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.056010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.056151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.056177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.056369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.056395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.056520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.056545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 
00:34:22.350 [2024-10-28 15:30:09.056656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.056683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.056817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.056845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.057026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.057050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.057170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.057195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.057321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.057346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.057452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.057477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.057621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.057648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.057777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.057804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.057904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.057950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.058053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.058094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 
00:34:22.350 [2024-10-28 15:30:09.058229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.058254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.058407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.058432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.058518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.058543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.058713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.058741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.058860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.350 [2024-10-28 15:30:09.058888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.350 qpair failed and we were unable to recover it. 00:34:22.350 [2024-10-28 15:30:09.058992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.059019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.059115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.059141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.059256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.059281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.059429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.059455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.059609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.059648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 
00:34:22.351 [2024-10-28 15:30:09.059777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.059805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.059895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.059921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.060023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.060048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.060188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.060213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.060359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.060386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.060528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.060567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.060716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.060744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.060839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.060867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.061003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.061029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.061171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.061210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 
00:34:22.351 [2024-10-28 15:30:09.061354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.061380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.061527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.061593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.061774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.061801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.061895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.061926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.062087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.062113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.062289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.062314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.062439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.062504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.062621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.062676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.062791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.062818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.062937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.062978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 
00:34:22.351 [2024-10-28 15:30:09.063152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.063176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.063333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.063372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.063496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.063543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.063683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.063710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.063840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.063867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.351 [2024-10-28 15:30:09.063986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.351 [2024-10-28 15:30:09.064027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.351 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.064235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.064259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.064406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.064432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.064616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.064640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.064799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.064826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 
00:34:22.352 [2024-10-28 15:30:09.064949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.064990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.065091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.065131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.065259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.065284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.065450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.065515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.065656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.065696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.065800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.065827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.065978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.066019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.066176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.066201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.066335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.066374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.066511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.066537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 
00:34:22.352 [2024-10-28 15:30:09.066709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.066737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.066868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.066895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.067046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.067072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.067182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.067208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.067352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.067376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.067496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.067520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.067659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.067686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.067820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.067847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.068019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.068045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.068258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.068296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 
00:34:22.352 [2024-10-28 15:30:09.068437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.068463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.068598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.068638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.068757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.068784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.068885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.068917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.069035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.069074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.069185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.069227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.069368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.069395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.069534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.069574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.069707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.069735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.069846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.069873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 
00:34:22.352 [2024-10-28 15:30:09.070029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.070054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.070208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.070232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.070371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.070396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.070510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.070536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.070728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.070756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.352 [2024-10-28 15:30:09.070849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.352 [2024-10-28 15:30:09.070876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.352 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.070994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.071033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.071224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.071249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.071369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.071395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.071566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.071591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 
00:34:22.353 [2024-10-28 15:30:09.071736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.071763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.071888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.071915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.072024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.072064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.072260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.072286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.072415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.072440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.072557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.072582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.072699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.072726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.072853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.072880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.073012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.073037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.073168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.073193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 
00:34:22.353 [2024-10-28 15:30:09.073349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.073392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.073523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.073563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.073677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.073704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.073825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.073851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.074054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.074079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.074228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.074252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.074393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.074417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.074531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.074571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.074727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.074755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.074865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.074891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 
00:34:22.353 [2024-10-28 15:30:09.075044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.075070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.075235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.075274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.075372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.075396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.075508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.075537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.075682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.075724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.075853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.075879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.076003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.076030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.076215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.076240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.076424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.076448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.076619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.076644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 
00:34:22.353 [2024-10-28 15:30:09.076780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.076807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.076903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.076929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.077075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.077115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.077260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.077284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.077426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.077451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.353 [2024-10-28 15:30:09.077592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.353 [2024-10-28 15:30:09.077618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.353 qpair failed and we were unable to recover it. 00:34:22.354 [2024-10-28 15:30:09.077745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.354 [2024-10-28 15:30:09.077771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.354 qpair failed and we were unable to recover it. 00:34:22.354 [2024-10-28 15:30:09.077880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.354 [2024-10-28 15:30:09.077906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.354 qpair failed and we were unable to recover it. 00:34:22.354 [2024-10-28 15:30:09.078026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.354 [2024-10-28 15:30:09.078052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.354 qpair failed and we were unable to recover it. 00:34:22.354 [2024-10-28 15:30:09.078207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.354 [2024-10-28 15:30:09.078231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.354 qpair failed and we were unable to recover it. 
00:34:22.359 [2024-10-28 15:30:09.117350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.117416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.117630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.117732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.117872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.117912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.118065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.118103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.118260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.118326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.118543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.118608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.118839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.118866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.118998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.119064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.119281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.119346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.119566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.119632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 
00:34:22.359 [2024-10-28 15:30:09.119845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.119872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.120048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.120119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.120333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.120399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.120633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.120720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.120859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.120884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.121042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.121113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.121372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.121439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.121678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.121729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.121830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.121856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.122010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.122035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 
00:34:22.359 [2024-10-28 15:30:09.122281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.122347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.122587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.122685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.122881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.122908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.123053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.123081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.123258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.123323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.123530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.123595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.123832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.123859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.123950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.123976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.124180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.124245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 00:34:22.359 [2024-10-28 15:30:09.124460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.359 [2024-10-28 15:30:09.124526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.359 qpair failed and we were unable to recover it. 
00:34:22.359 [2024-10-28 15:30:09.124727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.124754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.124877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.124904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.125042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.125108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.125297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.125362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.125624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.125672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.125838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.125873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.126015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.126080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.126346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.126411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.126622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.126647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.126768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.126794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 
00:34:22.360 [2024-10-28 15:30:09.127042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.127108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.127308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.127372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.127611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.127635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.127770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.127815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.128043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.128108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.128359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.128424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.128694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.128721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.128812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.128869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.129083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.129149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.129361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.129426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 
00:34:22.360 [2024-10-28 15:30:09.129677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.129704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.129808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.129850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.130089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.130155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.130398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.130464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.130737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.130764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.130864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.130890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.131072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.131143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.131381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.131445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.131690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.131718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.131838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.131865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 
00:34:22.360 [2024-10-28 15:30:09.132004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.132070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.132314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.132379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.132686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.132733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.132844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.360 [2024-10-28 15:30:09.132875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.360 qpair failed and we were unable to recover it. 00:34:22.360 [2024-10-28 15:30:09.133056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.133122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.133369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.133434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.133656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.133698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.133830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.133897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.134132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.134191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.134470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.134535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 
00:34:22.361 [2024-10-28 15:30:09.134762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.134789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.134910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.134961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.135141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.135206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.135459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.135525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.135744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.135771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.135960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.136026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.136308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.136373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.136608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.136719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.136964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.136990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.137188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.137253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 
00:34:22.361 [2024-10-28 15:30:09.137489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.137555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.137784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.137811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.137914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.137941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.138065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.138091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.138268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.138333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.138571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.138637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.138869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.138895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.139034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.139074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.139250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.139315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.139509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.139573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 
00:34:22.361 [2024-10-28 15:30:09.139874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.139910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.140049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.140115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.140332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.140398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.140669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.140736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.140953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.140979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.141110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.141136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.141345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.141410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.141678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.141745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.141995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.142020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.142246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.142311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 
00:34:22.361 [2024-10-28 15:30:09.142542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.142608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.142879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.142944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.143207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.143232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.143435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.361 [2024-10-28 15:30:09.143511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.361 qpair failed and we were unable to recover it. 00:34:22.361 [2024-10-28 15:30:09.143739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.143808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.144044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.144110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.144357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.144383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.144549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.144614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.144896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.144963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.145196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.145261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 
00:34:22.362 [2024-10-28 15:30:09.145460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.145486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.145614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.145641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.145818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.145884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.146112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.146178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.146427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.146467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.147348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.147423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.147699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.147726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.147857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.147885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.147997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.148024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.148157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.148184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 
00:34:22.362 [2024-10-28 15:30:09.148427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.148492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.148709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.148776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.149019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.149044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.149226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.149292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.149540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.149606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.149849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.149915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.150122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.150149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.150337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.150403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.150632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.150710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.150919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.150984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 
00:34:22.362 [2024-10-28 15:30:09.151242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.151269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.151417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.151482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.151706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.151774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.152061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.152126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.152373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.152399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.152551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.152615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.152904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.152969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.153185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.153250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.153511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.153537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.153707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.153773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 
00:34:22.362 [2024-10-28 15:30:09.154021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.154086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.154400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.154466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.154723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.154750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.154901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.154978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.362 qpair failed and we were unable to recover it. 00:34:22.362 [2024-10-28 15:30:09.155226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.362 [2024-10-28 15:30:09.155293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.155518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.155583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.155859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.155887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.156052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.156081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.156214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.156243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.156409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.156474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 
00:34:22.363 [2024-10-28 15:30:09.156685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.156712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.156841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.156867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.156982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.157047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.157304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.157369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.157602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.157686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.157846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.157872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.158047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.158112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.158382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.158448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.158693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.158720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.158861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.158888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 
00:34:22.363 [2024-10-28 15:30:09.159056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.159121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.159326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.159392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.159644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.159679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.159825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.159891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.160097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.160126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.160292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.160322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.160459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.160508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.160683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.160742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.160884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.160924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.161071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.161108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 
00:34:22.363 [2024-10-28 15:30:09.161282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.161321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.161495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.161529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.161697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.161729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.161845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.161878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.162045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.162071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.162195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.162221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.162401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.162431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.162570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.162599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.162785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.162812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.162957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.162986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 
00:34:22.363 [2024-10-28 15:30:09.163123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.163152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.163286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.163316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.163443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.163494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.163666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.163722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.363 qpair failed and we were unable to recover it. 00:34:22.363 [2024-10-28 15:30:09.163832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.363 [2024-10-28 15:30:09.163860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.163917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eab530 (9): Bad file descriptor 00:34:22.364 [2024-10-28 15:30:09.164130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.164170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.164316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.164362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.164487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.164533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.164664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.164692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 
00:34:22.364 [2024-10-28 15:30:09.164845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.164872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.165044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.165087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.165214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.165240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.165381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.165424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.165557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.165587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.165756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.165783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.165918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.165944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.166076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.166111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.166338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.166365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.166577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.166608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 
00:34:22.364 [2024-10-28 15:30:09.166741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.166768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.166875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.166902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.167118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.167148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.167302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.167330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.167522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.167551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.364 [2024-10-28 15:30:09.167672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.364 [2024-10-28 15:30:09.167715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.364 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.167878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.167904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.168049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.168081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.168178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.168206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.168362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.168391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 
00:34:22.660 [2024-10-28 15:30:09.168523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.168562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.168722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.168762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.168946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.168977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.169114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.169144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.169319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.169348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.169497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.169526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.169754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.169782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.169950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.169981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.170138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.170169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.170402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.170432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 
00:34:22.660 [2024-10-28 15:30:09.170558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.170584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.170762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.170789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.170974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.171031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.171209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.171239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.171449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.171494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.660 qpair failed and we were unable to recover it. 00:34:22.660 [2024-10-28 15:30:09.171715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.660 [2024-10-28 15:30:09.171743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.171898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.171925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.172141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.172170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.172358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.172391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.172597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.172627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 
00:34:22.661 [2024-10-28 15:30:09.172817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.172843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.173002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.173032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.173156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.173186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.173315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.173345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.173495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.173529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.173683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.173723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.173861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.173901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.174066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.174103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.174250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.174280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.174458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.174491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 
00:34:22.661 [2024-10-28 15:30:09.174621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.174659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.174814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.174841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.175021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.175060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.175300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.175329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.175506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.175535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.175708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.175735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.175883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.175910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.176086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.176115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.176311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.176341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.176528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.176558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 
00:34:22.661 [2024-10-28 15:30:09.176682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.176724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.176845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.176873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.177119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.177148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.177296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.177331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.177460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.177504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.177708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.177735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.661 qpair failed and we were unable to recover it. 00:34:22.661 [2024-10-28 15:30:09.177823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.661 [2024-10-28 15:30:09.177850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.178027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.178065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.178295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.178325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.178423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.178457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 
00:34:22.662 [2024-10-28 15:30:09.178588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.178613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.178805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.178845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.179040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.179084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.179260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.179291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.179433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.179466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.179671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.179714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.179905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.179947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.180070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.180107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.180286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.180322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.180440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.180481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 
00:34:22.662 [2024-10-28 15:30:09.180659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.180706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.180818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.180844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.180992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.181031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.181228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.181272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.181458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.181503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.181667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.181710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.181860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.181887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.182017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.182058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.182228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.182271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.182506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.182549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 
00:34:22.662 [2024-10-28 15:30:09.182722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.182749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.182920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.182966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.183182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.183226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.183446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.183489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.183667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.183694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.183834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.662 [2024-10-28 15:30:09.183877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.662 qpair failed and we were unable to recover it. 00:34:22.662 [2024-10-28 15:30:09.184023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.184067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.184180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.184220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.184419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.184462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.184598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.184624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 
00:34:22.663 [2024-10-28 15:30:09.184774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.184818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.184959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.185012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.185223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.185265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.185408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.185433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.185629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.185680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.185873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.185917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.186095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.186137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.186304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.186329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.186486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.186511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 00:34:22.663 [2024-10-28 15:30:09.186700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-10-28 15:30:09.186730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.663 qpair failed and we were unable to recover it. 
00:34:22.663 [2024-10-28 15:30:09.186905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.663 [2024-10-28 15:30:09.186949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420
00:34:22.663 qpair failed and we were unable to recover it.
[... the same pair of messages — posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 — repeats from 15:30:09.187116 through 15:30:09.220586, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:34:22.669 [2024-10-28 15:30:09.220729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.669 [2024-10-28 15:30:09.220783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420
00:34:22.669 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 from 15:30:09.220902 through 15:30:09.225743, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:34:22.670 [2024-10-28 15:30:09.225832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.225858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.226011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.226037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.226164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.226207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.226363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.226399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.226500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.226525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.226715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.226742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.226844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.226870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.227033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.227062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.227228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.227257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.227358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.227387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 
00:34:22.670 [2024-10-28 15:30:09.227590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.227645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.227801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.227830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.228009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.228035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.228219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.228262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.670 qpair failed and we were unable to recover it. 00:34:22.670 [2024-10-28 15:30:09.228426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.670 [2024-10-28 15:30:09.228469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.228600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.228640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.228811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.228839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.228976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.229019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.229175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.229201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.229362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.229403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 
00:34:22.671 [2024-10-28 15:30:09.229545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.229570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.229727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.229769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.229911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.229941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.230048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.230076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.230224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.230252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.230427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.230472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.230606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.230632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.230779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.230806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.230913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.230942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.231112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.231165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 
00:34:22.671 [2024-10-28 15:30:09.231302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.231331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.231496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.231523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.231665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.231701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.231804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.231831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.231983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.232012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.232170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.232195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.232337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.232363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.232559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.671 [2024-10-28 15:30:09.232588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.671 qpair failed and we were unable to recover it. 00:34:22.671 [2024-10-28 15:30:09.232736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.232763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.232854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.232881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 
00:34:22.672 [2024-10-28 15:30:09.233018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.233044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.233173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.233216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.233346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.233375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.233536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.233566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.233682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.233724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.233840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.233867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.233978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.234003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.234148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.234188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.234358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.234396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.234578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.234608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 
00:34:22.672 [2024-10-28 15:30:09.234740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.234766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.234888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.234917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.235056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.235096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.235202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.235227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.235397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.235425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.235582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.235611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.235736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.235763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.235888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.235914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.236096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.236121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.236212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.236237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 
00:34:22.672 [2024-10-28 15:30:09.236394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.236423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.236584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.236614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.236768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.236808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.236913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.236958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.237100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.237142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.237313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.237342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.237480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.237509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.237666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.237693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.672 [2024-10-28 15:30:09.237791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.672 [2024-10-28 15:30:09.237817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.672 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.237899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.237925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 
00:34:22.673 [2024-10-28 15:30:09.238072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.238098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.238227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.238252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.238495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.238523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.238729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.238756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.238853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.238885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.239039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.239068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.239260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.239288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.239527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.239556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.239704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.239730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.239842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.239868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 
00:34:22.673 [2024-10-28 15:30:09.239986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.240031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.240272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.240301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.240562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.240593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.240771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.240797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.240901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.240941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.241055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.241081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.241267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.241311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.241485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.241514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.241688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.241714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.241825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.241851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 
00:34:22.673 [2024-10-28 15:30:09.241988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.242017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.242191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.242227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.242459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.242488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.242657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.242712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.242808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.242834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.243006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.243050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.243193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.243222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.243341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.243384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.243513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.243541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 00:34:22.673 [2024-10-28 15:30:09.243706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.673 [2024-10-28 15:30:09.243733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.673 qpair failed and we were unable to recover it. 
00:34:22.673 [2024-10-28 15:30:09.243863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.243900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.244118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.244147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.244323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.244352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.244526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.244567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.244734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.244761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.244863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.244889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.245036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.245061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.245192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.245217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.245363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.245392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.245557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.245586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 
00:34:22.674 [2024-10-28 15:30:09.245721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.245748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.245850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.245877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.245996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.246021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.246182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.246211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.246388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.246422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.246553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.246596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.246752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.246777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.246893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.246936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.247073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.247098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.247305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.247334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 
00:34:22.674 [2024-10-28 15:30:09.247487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.247517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.247624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.247655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.247756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.247782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.247903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.247945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.248108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.248143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.248324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.248353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.248459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.248488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.248624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.248655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.248767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.248794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.248894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.248919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 
00:34:22.674 [2024-10-28 15:30:09.249066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.674 [2024-10-28 15:30:09.249105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.674 qpair failed and we were unable to recover it. 00:34:22.674 [2024-10-28 15:30:09.249238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.249269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.249439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.249468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.249697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.249724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.249830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.249856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.249990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.250019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.250259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.250284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.250443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.250472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.250623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.250658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.250781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.250807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 
00:34:22.675 [2024-10-28 15:30:09.250976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.251001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.251158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.251188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.251371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.251395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.251589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.251617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.251749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.251777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.251888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.251920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.252039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.252064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.252293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.252322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.252463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.252502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.252661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.252692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 
00:34:22.675 [2024-10-28 15:30:09.252793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.252819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.253032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.253058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.253217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.253246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.253373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.253402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.253610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.253643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.253788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.253814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.253967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.253992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.254163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.254192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.254362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.254386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.254539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.254568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 
00:34:22.675 [2024-10-28 15:30:09.254703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.254733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.675 qpair failed and we were unable to recover it. 00:34:22.675 [2024-10-28 15:30:09.254839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.675 [2024-10-28 15:30:09.254865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.255006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.255031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.255163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.255192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.255332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.255358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.255539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.255568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.255730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.255759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.255878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.255903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.256047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.256081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.256229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.256258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 
00:34:22.676 [2024-10-28 15:30:09.256389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.256414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.256643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.256678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.256810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.256839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.256973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.257014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.257195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.257224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.257368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.257407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.257617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.257645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.257817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.257843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.257962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.257991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.258144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.258184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 
00:34:22.676 [2024-10-28 15:30:09.258352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.258380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.258538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.258567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.258701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.258728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.258827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.258853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.259015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.259054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.259214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.259239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.259353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.259378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.259597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.259626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.259763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.259788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.259913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.259954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 
00:34:22.676 [2024-10-28 15:30:09.260119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.260148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.676 [2024-10-28 15:30:09.260257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.676 [2024-10-28 15:30:09.260282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.676 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.260462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.260506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.260641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.260676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.260796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.260825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.260975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.261000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.261168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.261196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.261343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.261382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.261547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.261576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.261723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.261752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 
00:34:22.677 [2024-10-28 15:30:09.261871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.261896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.261997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.262022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.262206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.262245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.262350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.262375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.262511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.262537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.262660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.262685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.262820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.262846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.262976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.263002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.263159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.263188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.263324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.263365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 
00:34:22.677 [2024-10-28 15:30:09.263500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.263541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.263702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.263731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.263848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.263874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.264043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.264083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.264214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.264242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.264365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.264390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.677 qpair failed and we were unable to recover it. 00:34:22.677 [2024-10-28 15:30:09.264541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.677 [2024-10-28 15:30:09.264566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.264686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.264715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.264844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.264869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.264985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.265010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 
00:34:22.678 [2024-10-28 15:30:09.265143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.265184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.265316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.265342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.265510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.265549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.265706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.265732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.265826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.265852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.265976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.266002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.266157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.266187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.266327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.266367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.266503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.266543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.266710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.266740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 
00:34:22.678 [2024-10-28 15:30:09.266855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.266882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.267026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.267051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.267185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.267226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.267350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.267376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.267489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.267530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.267714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.267743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.267848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.267875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.267999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.268039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.268188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.268219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.268360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.268387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 
00:34:22.678 [2024-10-28 15:30:09.268515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.268541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.268717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.268747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.268854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.268880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.269033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.269060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.269199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.269229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.269356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.269382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.269535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.678 [2024-10-28 15:30:09.269577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.678 qpair failed and we were unable to recover it. 00:34:22.678 [2024-10-28 15:30:09.269711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.269741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.269878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.269904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.270052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.270095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 
00:34:22.679 [2024-10-28 15:30:09.270223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.270252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.270367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.270393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.270563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.270608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.270733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.270761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.270853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.270880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.270999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.271026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.271157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.271186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.271346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.271373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.271519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.271565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.271745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.271775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 
00:34:22.679 [2024-10-28 15:30:09.271890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.271916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.272073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.272101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.272249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.272278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.272417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.272442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.272582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.272640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.272773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.272804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.272973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.273001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.273123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.273166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.273320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.273349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.273472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.273498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 
00:34:22.679 [2024-10-28 15:30:09.273587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.273614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.273745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.273772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.273900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.273927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.274064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.274107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.274239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.274275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.274405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.274432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.274526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.679 [2024-10-28 15:30:09.274553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.679 qpair failed and we were unable to recover it. 00:34:22.679 [2024-10-28 15:30:09.274686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.274716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.274825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.274853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.275004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.275031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 
00:34:22.680 [2024-10-28 15:30:09.275193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.275222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.275397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.275422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.275599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.275627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.275746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.275772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.275868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.275893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.276012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.276037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.276145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.276173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.276324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.276349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.276544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.276574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.276731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.276761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 
00:34:22.680 [2024-10-28 15:30:09.276871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.276896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.277044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.277069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.277232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.277261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.277390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.277430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.277572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.277613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.277759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.277786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.277877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.277902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.278081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.278125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.278237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.278265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.278393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.278418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 
00:34:22.680 [2024-10-28 15:30:09.278623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.278660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.278788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.278819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.278954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.278978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.279112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.279155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.279311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.279340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.279451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.279475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.279658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.279686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.279810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.680 [2024-10-28 15:30:09.279840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.680 qpair failed and we were unable to recover it. 00:34:22.680 [2024-10-28 15:30:09.280011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.280036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.280162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.280205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 
00:34:22.681 [2024-10-28 15:30:09.280302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.280331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.280482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.280507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.280657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.280684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.280796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.280825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.280993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.281017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.281195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.281224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.281329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.281358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.281489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.281513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.281688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.281716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.281825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.281854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 
00:34:22.681 [2024-10-28 15:30:09.282015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.282054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.282215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.282244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.282393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.282423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.282541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.282565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.282727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.282752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.282898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.282927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.283050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.283089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.283241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.283293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.283436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.283465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.283577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.283603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 
00:34:22.681 [2024-10-28 15:30:09.283758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.283783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.283901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.283942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.284056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.284081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.284217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.284243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.284383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.284411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.284531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.284557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.284713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.284771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.284919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.284951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.681 [2024-10-28 15:30:09.285086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.681 [2024-10-28 15:30:09.285127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.681 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.285279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.285323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 
00:34:22.682 [2024-10-28 15:30:09.285450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.285480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.285622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.285662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.285791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.285834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.285956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.285984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.286139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.286164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.286286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.286313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.286467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.286495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.286675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.286717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.286813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.286839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.286938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.286966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 
00:34:22.682 [2024-10-28 15:30:09.287125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.287152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.287333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.287364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.287540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.287584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.287710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.287739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.287834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.287861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.288031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.288060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.288189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.288215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.288319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.288346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.288518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.288549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.288662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.288689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 
00:34:22.682 [2024-10-28 15:30:09.288783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.288809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.288944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.288973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.289126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.289153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.289254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.289280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.289419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.289449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.289567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.289592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.289719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.289746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.289852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.289880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.682 qpair failed and we were unable to recover it. 00:34:22.682 [2024-10-28 15:30:09.290028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.682 [2024-10-28 15:30:09.290054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.290238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.290267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 
00:34:22.683 [2024-10-28 15:30:09.290422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.290451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.290605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.290630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.290750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.290789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.290893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.290920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.291078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.291105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.291207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.291233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.291376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.291405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.291557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.291584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.291683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.291710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.291837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.291863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 
00:34:22.683 [2024-10-28 15:30:09.292040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.292065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.292190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.292220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.292372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.292401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.292537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.292563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.292694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.292720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.292811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.292837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.292962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.292988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.683 [2024-10-28 15:30:09.293127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.683 [2024-10-28 15:30:09.293167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.683 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.293298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.293327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.293447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.293491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 
00:34:22.684 [2024-10-28 15:30:09.293591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.293629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.293760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.293786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.293880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.293907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.294041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.294066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.294237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.294266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.294401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.294427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.294564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.294589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.294725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.294754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.294867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.294893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.295028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.295053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 
00:34:22.684 [2024-10-28 15:30:09.295191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.295220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.295350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.295375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.295492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.295518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.295626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.295663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.295776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.295802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.295899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.295940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.296058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.296087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.296212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.296236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.296395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.296450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.296552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.296581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 
00:34:22.684 [2024-10-28 15:30:09.296719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.296746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.296848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.296874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.296977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.297005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.297208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.297232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.297384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.297413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.297559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.297587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.297704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.297730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.297849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.684 [2024-10-28 15:30:09.297875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.684 qpair failed and we were unable to recover it. 00:34:22.684 [2024-10-28 15:30:09.297981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.298010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.298162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.298201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 
00:34:22.685 [2024-10-28 15:30:09.298358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.298387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.298552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.298586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.298710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.298736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.298836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.298863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.298991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.299032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.299212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.299236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.299403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.299431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.299552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.299581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.299727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.299754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.299854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.299880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 
00:34:22.685 [2024-10-28 15:30:09.299973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.300002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.300119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.300144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.300342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.300371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.300554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.300583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.300693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.300719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.300825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.300851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.301003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.301049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.301223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.301250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.301453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.301482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.301628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.301669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 
00:34:22.685 [2024-10-28 15:30:09.301796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.301822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.302022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.302050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.302231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.302271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.302397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.302422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.302532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.302558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.302699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.302742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.302843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.302868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.303007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.303032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.303141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.303185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.685 [2024-10-28 15:30:09.303344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.303368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 
00:34:22.685 [2024-10-28 15:30:09.303503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.685 [2024-10-28 15:30:09.303545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.685 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.303682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.303711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.303822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.303848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.304007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.304032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.304192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.304220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.304355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.304380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.304516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.304540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.304656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.304685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.304808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.304834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.304931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.304957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 
00:34:22.686 [2024-10-28 15:30:09.305121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.305149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.305279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.305304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.305463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.305504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.305629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.305666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.305817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.305843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.305981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.306006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.306187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.306215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.306375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.306400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.306573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.306597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.306731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.306761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 
00:34:22.686 [2024-10-28 15:30:09.306903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.306942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.307075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.307117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.307243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.307271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.307400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.307424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.307565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.307590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.307751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.307796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.307942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.307976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.308121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.308151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.308288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.308317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.308469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.308493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 
00:34:22.686 [2024-10-28 15:30:09.308630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.308677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.308846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.686 [2024-10-28 15:30:09.308876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.686 qpair failed and we were unable to recover it. 00:34:22.686 [2024-10-28 15:30:09.309040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.309065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.309243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.309271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.309456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.309484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.309609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.309633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.309778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.309804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.309956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.309984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.310118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.310143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.310348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.310376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 
00:34:22.687 [2024-10-28 15:30:09.310527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.310555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.310710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.310736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.310838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.310864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.311044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.311072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.311238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.311263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.311516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.311544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.311729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.311757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.311884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.311910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.312031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.312056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.312190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.312218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 
00:34:22.687 [2024-10-28 15:30:09.312377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.312401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.312537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.312562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.312719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.312750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.312849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.312875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.312992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.313018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.313157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.313196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.313334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.313358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.313564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.313592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.313723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.313752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.313885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.313910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 
00:34:22.687 [2024-10-28 15:30:09.314113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.314141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.314293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.314322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.314472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.687 [2024-10-28 15:30:09.314496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.687 qpair failed and we were unable to recover it. 00:34:22.687 [2024-10-28 15:30:09.314690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.314720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.314847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.314876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.315039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.315063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.315244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.315272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.315443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.315471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.315607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.315647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.315759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.315784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 
00:34:22.688 [2024-10-28 15:30:09.315913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.315941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.316152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.316176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.316360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.316396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.316579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.316607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.316752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.316779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.316876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.316902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.317076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.317105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.317234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.317259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.317358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.317382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.317514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.317542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 
00:34:22.688 [2024-10-28 15:30:09.317693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.317720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.317845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.317871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.317992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.318021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.318182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.318207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.318343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.318385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.318542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.318570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.318731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.318758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.318856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.318881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.319010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.319038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.319196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.319221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 
00:34:22.688 [2024-10-28 15:30:09.319398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.688 [2024-10-28 15:30:09.319426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.688 qpair failed and we were unable to recover it. 00:34:22.688 [2024-10-28 15:30:09.319525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.319553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.319663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.319703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.319807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.319832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.319976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.320010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.320117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.320142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.320361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.320390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.320509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.320553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.320685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.320733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.320838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.320870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 
00:34:22.689 [2024-10-28 15:30:09.321076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.321106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.321294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.321319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.321494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.321524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.321674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.321705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.321828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.321854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.322098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.322126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.322266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.322295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.322463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.322492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.322593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.322622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.322757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.322785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 
00:34:22.689 [2024-10-28 15:30:09.322874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.322901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.323133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.323162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.323352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.323382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.323607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.323633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.323764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.323793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.323894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.323923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.324050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.324078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.324218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.324244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.324449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.324479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 00:34:22.689 [2024-10-28 15:30:09.324632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.689 [2024-10-28 15:30:09.324677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.689 qpair failed and we were unable to recover it. 
00:34:22.689 [2024-10-28 15:30:09.324787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.324813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.324916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.324945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.325046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.325071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.325213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.325239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.325411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.325442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.325573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.325598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.325751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.325778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.325866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.325894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.326038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.326063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.326210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.326251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 
00:34:22.690 [2024-10-28 15:30:09.326409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.326440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.326580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.326604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.326724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.326750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.326903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.326954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.327104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.327128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.327303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.327346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.327485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.327514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.327647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.327695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.327800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.327825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.327966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.327994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 
00:34:22.690 [2024-10-28 15:30:09.328133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.328157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.328366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.328394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.328489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.328516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.328646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.328692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.328785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.328811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.328961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.328989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.329151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.329175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.329351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.329390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.329534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.329562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 00:34:22.690 [2024-10-28 15:30:09.329711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.690 [2024-10-28 15:30:09.329738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.690 qpair failed and we were unable to recover it. 
00:34:22.690 [2024-10-28 15:30:09.329836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.329861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.329963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.329991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.330095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.330119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.330283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.330308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.330450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.330478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.330596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.330637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.330788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.330814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.330950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.330979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.331105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.331129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.331250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.331276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 
00:34:22.691 [2024-10-28 15:30:09.331426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.331454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.331624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.331657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.331797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.331822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.331986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.332017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.332165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.332191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.332387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.332417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.332629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.332671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.332798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.332825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.332942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.332972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.333179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.333208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 
00:34:22.691 [2024-10-28 15:30:09.333400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.333427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.333644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.333685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.333781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.333810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.333939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.333983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.334104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.334131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.334278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.334312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.334432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.334456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.334604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.334631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.334802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.334846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.335004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.335031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 
00:34:22.691 [2024-10-28 15:30:09.335167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.691 [2024-10-28 15:30:09.335193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.691 qpair failed and we were unable to recover it. 00:34:22.691 [2024-10-28 15:30:09.335337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.335365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.335472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.335496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.335660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.335686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.335857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.335885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.336039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.336064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.336260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.336289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.336430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.336468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.336621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.336646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.336789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.336831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 
00:34:22.692 [2024-10-28 15:30:09.336986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.337014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.337137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.337162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.337300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.337325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.337477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.337505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.337623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.337674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.337804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.337830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.337981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.338009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.338170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.338194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.338305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.338331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.338472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.338500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 
00:34:22.692 [2024-10-28 15:30:09.338625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.338676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.338802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.338828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.338956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.338988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.339128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.339154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.339256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.339282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.339438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.339467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.339633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.339672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.339826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.339852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.339978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.340007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.340107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.340132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 
00:34:22.692 [2024-10-28 15:30:09.340292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.340318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.340484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.692 [2024-10-28 15:30:09.340513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.692 qpair failed and we were unable to recover it. 00:34:22.692 [2024-10-28 15:30:09.340615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.340640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.340769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.340795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.340880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.340911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.341070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.341095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.341227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.341252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.341399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.341428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.341554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.341596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.341705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.341732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 
00:34:22.693 [2024-10-28 15:30:09.341824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.341849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.341941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.341980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.342140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.342192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.342345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.342375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.342580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.342605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.342747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.342774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.342950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.342985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.343129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.343154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.343292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.343319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.343487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.343515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 
00:34:22.693 [2024-10-28 15:30:09.343657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.343699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.343802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.343828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.343965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.343995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.344163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.344188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.344366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.344394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.344542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.344571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.344720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.344746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.344861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.344886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.345069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.345099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.345242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.345266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 
00:34:22.693 [2024-10-28 15:30:09.345404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.345446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.345584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.345624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.345770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.345796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.693 qpair failed and we were unable to recover it. 00:34:22.693 [2024-10-28 15:30:09.345960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.693 [2024-10-28 15:30:09.345985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.346134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.346174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.346286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.346311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.346456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.346482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.346585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.346613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.346748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.346775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.346879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.346916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 
00:34:22.694 [2024-10-28 15:30:09.347062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.347091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.347269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.347294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.347483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.347512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.347732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.347759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.347840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.347981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.348006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.348173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.348202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.348367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.348391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.348502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.348527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.348642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.348677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 
00:34:22.694 [2024-10-28 15:30:09.348790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.348816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.348920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.348960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.349150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.349179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.349351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.349376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.349533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.349561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.349715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.349744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.349863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.349888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.350013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.350038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.350180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.694 [2024-10-28 15:30:09.350221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.694 qpair failed and we were unable to recover it. 00:34:22.694 [2024-10-28 15:30:09.350343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.350368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 
00:34:22.695 [2024-10-28 15:30:09.350615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.350644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.350766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.350795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.350910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.350936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.351106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.351147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.351320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.351353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.351504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.351529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.351635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.351695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.351819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.351848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.351969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.351993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.352135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.352159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 
00:34:22.695 [2024-10-28 15:30:09.352349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.352379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.352508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.352555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.352702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.352729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.352826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.352852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.352945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.352969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.353114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.353139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.353284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.353313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.353438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.353463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.353627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.353676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.353835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.353864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 
00:34:22.695 [2024-10-28 15:30:09.353982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.354022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.354166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.354191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.354369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.354397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.354520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.354545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.354669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.354697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.354848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.354878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.355038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.355061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.355196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.355221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.355382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.695 [2024-10-28 15:30:09.355411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.695 qpair failed and we were unable to recover it. 00:34:22.695 [2024-10-28 15:30:09.355537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.355562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 
00:34:22.696 [2024-10-28 15:30:09.355705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.355731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.355846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.355874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.356026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.356066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.356240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.356269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.356417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.356446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.356580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.356605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.356763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.356790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.356962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.356990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.357143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.357167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.357309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.357352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 
00:34:22.696 [2024-10-28 15:30:09.357488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.357516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.357677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.357710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.357839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.357883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.358068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.358096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.358257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.358281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.358444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.358468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.358642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.358688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.358806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.358833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.358989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.359020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.359229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.359257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 
00:34:22.696 [2024-10-28 15:30:09.359434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.359458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.359608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.359642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.359780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.359808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.359923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.359963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.360117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.360142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.360282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.360316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.360541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.360566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.360715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.360756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.360887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.360915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.361015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.361040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 
00:34:22.696 [2024-10-28 15:30:09.361180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.361206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.361375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.361403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.361576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.696 [2024-10-28 15:30:09.361605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.696 qpair failed and we were unable to recover it. 00:34:22.696 [2024-10-28 15:30:09.361734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.361760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.361859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.361885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.362040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.362079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.362225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.362254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.362384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.362422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.362583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.362613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.362748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.362774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 
00:34:22.697 [2024-10-28 15:30:09.362872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.362897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.363047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.363097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.363265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.363293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.363438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.363470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.363646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.363686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.363806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.363848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.363993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.364029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.364151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.364189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.364301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.364326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.364542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.364570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 
00:34:22.697 [2024-10-28 15:30:09.364738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.364764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.364887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.364929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.365091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.365120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.365270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.365294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.365435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.365474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.365637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.365672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.365807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.365832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.365931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.365972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.366116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.366145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 00:34:22.697 [2024-10-28 15:30:09.366329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.697 [2024-10-28 15:30:09.366353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.697 qpair failed and we were unable to recover it. 
00:34:22.697 [2024-10-28 15:30:09.366510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.697 [2024-10-28 15:30:09.366543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:22.697 qpair failed and we were unable to recover it.
00:34:22.697 [last three messages repeated continuously for tqpair=0x7fea50000b90 (addr=10.0.0.2, port=4420) through 2024-10-28 15:30:09.403665]
00:34:22.704 [2024-10-28 15:30:09.403635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.704 [2024-10-28 15:30:09.403665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:22.704 qpair failed and we were unable to recover it.
00:34:22.704 [2024-10-28 15:30:09.403853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.403881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.404053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.404096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.404262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.404293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.404464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.404492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.404677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.404718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.404833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.404860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.404980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.405006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.405171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.405199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.405327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.405353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.405516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.405542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 
00:34:22.704 [2024-10-28 15:30:09.405743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.405787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.405949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.405991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.406145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.704 [2024-10-28 15:30:09.406174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.704 qpair failed and we were unable to recover it. 00:34:22.704 [2024-10-28 15:30:09.406337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.406366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.406481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.406520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.406718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.406748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.406882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.406911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.407010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.407036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.407225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.407273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.407444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.407474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 
00:34:22.705 [2024-10-28 15:30:09.407579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.407604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.407768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.407795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.407958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.407987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.408167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.408192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.408361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.408390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.408497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.408527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.408690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.408717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.408818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.408844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.409026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.409055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.409195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.409219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 
00:34:22.705 [2024-10-28 15:30:09.409411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.409440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.409595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.409623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.409800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.409826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.409960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.409985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.410113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.410141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.410312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.410337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.410498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.410526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.410665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.410694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.410813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.410839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.411003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.411044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 
00:34:22.705 [2024-10-28 15:30:09.411180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.705 [2024-10-28 15:30:09.411208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.705 qpair failed and we were unable to recover it. 00:34:22.705 [2024-10-28 15:30:09.411370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.411395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.411535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.411576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.411729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.411759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.411884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.411926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.412056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.412082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.412262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.412291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.412464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.412489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.412668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.412710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.412825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.412854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 
00:34:22.706 [2024-10-28 15:30:09.412996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.413036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.413173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.413227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.413360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.413389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.413529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.413556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.413700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.413726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.413904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.413933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.414076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.414101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.414285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.414314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.414445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.414478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.414631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.414667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 
00:34:22.706 [2024-10-28 15:30:09.414787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.414814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.414908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.414950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.415104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.415129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.415319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.415345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.415528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.415557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.415699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.415725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.415873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.415915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.416013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.416042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.416180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.416206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.706 [2024-10-28 15:30:09.416322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.416348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 
00:34:22.706 [2024-10-28 15:30:09.416487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.706 [2024-10-28 15:30:09.416526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.706 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.416644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.416686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.416788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.416815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.417005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.417034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.417168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.417193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.417344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.417551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.417579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.417718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.417744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.417934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.417963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.418064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.418093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 
00:34:22.707 [2024-10-28 15:30:09.418229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.418255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.418359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.418384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.418570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.418598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.418754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.418795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.418964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.419002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.419160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.419189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.419321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.419361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.419505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.419547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.419715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.419742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.419855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.419881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 
00:34:22.707 [2024-10-28 15:30:09.420014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.420057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.420176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.420205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.420429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.420454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.420616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.420644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.420795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.420824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.421005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.421030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.421206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.421235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.421342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.421370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.421521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.421552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.421702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.421754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 
00:34:22.707 [2024-10-28 15:30:09.421941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.421970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.422082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.422108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.422299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.707 [2024-10-28 15:30:09.422341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.707 qpair failed and we were unable to recover it. 00:34:22.707 [2024-10-28 15:30:09.422519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.422547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.422680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.422707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.422844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.422869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.423045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.423074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.423220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.423244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.423383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.423425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.423588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.423617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 
00:34:22.708 [2024-10-28 15:30:09.423793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.423818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.423971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.424000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.424188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.424217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.424342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.424382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.424486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.424510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.424705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.424731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.424890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.424916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.425076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.425105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.425266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.425295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.425435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.425473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 
00:34:22.708 [2024-10-28 15:30:09.425600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.425626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.425777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.425806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.425934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.425976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.426058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.426084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.426269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.426298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.426423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.426464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.426611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.426657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.426793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.426821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.427003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.427027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.427196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.427224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 
00:34:22.708 [2024-10-28 15:30:09.427330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.427359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.427472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.427496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.427676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.427702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.427863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.427892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.428043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.428068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.428239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.428282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.708 [2024-10-28 15:30:09.428456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.708 [2024-10-28 15:30:09.428485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.708 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.428600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.428642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.428791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.428837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.428970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.429010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 
00:34:22.709 [2024-10-28 15:30:09.429174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.429198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.429329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.429378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.429530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.429559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.429704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.429731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.429867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.429894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.430106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.430134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.430275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.430303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.430571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.430600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.430763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.430790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.430957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.430983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 
00:34:22.709 [2024-10-28 15:30:09.431154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.431183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.431321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.431354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.431584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.431609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.431796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.431831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.431956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.431985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.432184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.432209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.432404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.432432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.432529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.432558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.432736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.432761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.432879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.432920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 
00:34:22.709 [2024-10-28 15:30:09.433094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.433123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.433239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.433266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.433528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.433557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.433681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.433711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.433886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.433916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.434132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.434161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.434363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.709 [2024-10-28 15:30:09.434392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.709 qpair failed and we were unable to recover it. 00:34:22.709 [2024-10-28 15:30:09.434556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.434581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.434696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.434723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.434849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.434878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 
00:34:22.710 [2024-10-28 15:30:09.435021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.435061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.435186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.435226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.435398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.435428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.435601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.435627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.435799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.435828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.435989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.436018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.436142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.436182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.436341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.436397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.436529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.436566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.436733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.436759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 
00:34:22.710 [2024-10-28 15:30:09.436872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.436898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.437045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.437074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.437222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.437262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.437373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.437413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.437580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.437609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.437777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.437803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.437915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.437941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.438131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.438159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.438341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.438373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.438523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.438552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 
00:34:22.710 [2024-10-28 15:30:09.438721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.438747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.438899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.438933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.439064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.439088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.439263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.439291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.439448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.439472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.439728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.439758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.439900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.439929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.710 qpair failed and we were unable to recover it. 00:34:22.710 [2024-10-28 15:30:09.440105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.710 [2024-10-28 15:30:09.440129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.440277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.440303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.440498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.440528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 
00:34:22.711 [2024-10-28 15:30:09.440643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.440702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.440808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.440835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.440980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.441009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.441192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.441225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.441374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.441404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.441548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.441576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.441750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.441777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.441942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.441971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.442106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.442134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.442307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.442331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 
00:34:22.711 [2024-10-28 15:30:09.442555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.442583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.442710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.442744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.442881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.442922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.443096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.443125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.443301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.443330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.443498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.443527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.443681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.443722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.443845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.443885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.444056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.444084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.444210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.444239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 
00:34:22.711 [2024-10-28 15:30:09.444380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.444408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.444518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.444545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.444719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.444765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.444903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.444934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.445108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.445133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.711 [2024-10-28 15:30:09.445292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.711 [2024-10-28 15:30:09.445326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.711 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.445455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.445484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.445663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.445691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.445787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.445829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.445996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.446025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 
00:34:22.712 [2024-10-28 15:30:09.446227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.446268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.446419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.446449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.446580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.446609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.446750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.446777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.446964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.446994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.447133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.447162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.447280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.447314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.447441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.447468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.447642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.447684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.447860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.447902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 
00:34:22.712 [2024-10-28 15:30:09.448008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.448049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.448229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.448259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.448456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.448481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.448692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.448738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.448902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.448929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.449109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.449139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.449350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.449380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.449508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.449537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.449675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.449702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.449840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.449866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 
00:34:22.712 [2024-10-28 15:30:09.450039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.450068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.450228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.450255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.450430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.450460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.450639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.450680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.450862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.450894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.451037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.451066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.451238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.451267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.451387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.451431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.451542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.451568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 00:34:22.712 [2024-10-28 15:30:09.451746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.451776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.712 qpair failed and we were unable to recover it. 
00:34:22.712 [2024-10-28 15:30:09.451926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.712 [2024-10-28 15:30:09.451976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.452120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.452174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.452363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.452393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.452563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.452604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.452783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.452813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.452946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.452985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.453141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.453167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.453312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.453355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.453522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.453552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.453692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.453720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 
00:34:22.713 [2024-10-28 15:30:09.453841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.453869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.453976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.454005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.454170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.454218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.454401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.454430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.454571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.454601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.454739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.454772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.454892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.454919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.455079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.455111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.455254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.455299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.455461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.455490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 
00:34:22.713 [2024-10-28 15:30:09.455702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.455729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.455915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.455941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.456148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.456178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.456343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.456373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.456488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.456513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.456689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.456738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.456867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.456903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.457077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.457108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.457301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.457337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.457444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.457473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 
00:34:22.713 [2024-10-28 15:30:09.457661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.457690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.457833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.457862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.458069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.458106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.458332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.713 [2024-10-28 15:30:09.458358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.713 qpair failed and we were unable to recover it. 00:34:22.713 [2024-10-28 15:30:09.458577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.714 [2024-10-28 15:30:09.458606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.714 qpair failed and we were unable to recover it. 00:34:22.714 [2024-10-28 15:30:09.458757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.714 [2024-10-28 15:30:09.458787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.714 qpair failed and we were unable to recover it. 00:34:22.714 [2024-10-28 15:30:09.458954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.714 [2024-10-28 15:30:09.458980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.714 qpair failed and we were unable to recover it. 00:34:22.714 [2024-10-28 15:30:09.459215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.714 [2024-10-28 15:30:09.459244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.714 qpair failed and we were unable to recover it. 00:34:22.714 [2024-10-28 15:30:09.459397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.714 [2024-10-28 15:30:09.459427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.714 qpair failed and we were unable to recover it. 00:34:22.714 [2024-10-28 15:30:09.459544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.714 [2024-10-28 15:30:09.459594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:22.714 qpair failed and we were unable to recover it. 
00:34:22.714 [2024-10-28 15:30:09.459765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.714 [2024-10-28 15:30:09.459810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:22.714 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 15:30:09.459 and 15:30:09.498 (elapsed 00:34:22.714 to 00:34:23.006), against tqpair=0x7fea50000b90 and briefly tqpair=0x1e9d570, always with addr=10.0.0.2, port=4420 ...]
00:34:23.006 [2024-10-28 15:30:09.498054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.006 [2024-10-28 15:30:09.498079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:23.006 qpair failed and we were unable to recover it.
00:34:23.006 [2024-10-28 15:30:09.498195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.498235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.498359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.498388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.498549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.498590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.498736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.498778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.498908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.498937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.499033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.499059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.499272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.499297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.499501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.499530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.499641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.499672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.499796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.499822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 
00:34:23.006 [2024-10-28 15:30:09.499947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.499976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.500180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.500205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.500332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.500360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.500528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.500558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.500678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.500705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.500804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.500835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.500949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.500979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.501113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.501154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.501300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.501325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.501442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.501470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 
00:34:23.006 [2024-10-28 15:30:09.501586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.501611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.501731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.501757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.501864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.501892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.502013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.502039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.502204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.502228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.502338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.502367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.502503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.502529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.502677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.502719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.502858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.502886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.503051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.503075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 
00:34:23.006 [2024-10-28 15:30:09.503243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.503272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.503373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.503408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.503589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.503617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.503755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.503782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.503918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.503963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.504139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.504164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.504337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.504375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.504546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.504586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.504735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.504761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 00:34:23.006 [2024-10-28 15:30:09.504851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.006 [2024-10-28 15:30:09.504877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.006 qpair failed and we were unable to recover it. 
00:34:23.006 [2024-10-28 15:30:09.504988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.505017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.505150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.505189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.505364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.505400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.505591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.505619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.505744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.505771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.505866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.505892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.506061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.506089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.506219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.506258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.506405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.506447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.506581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.506610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 
00:34:23.007 [2024-10-28 15:30:09.506731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.506758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.506853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.506880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.507009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.507038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.507163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.507203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.507350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.507402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.507515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.507548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.507699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.507725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.507833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.507859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.507979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.508008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.508172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.508197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 
00:34:23.007 [2024-10-28 15:30:09.508303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.508330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.508501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.508529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.508641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.508682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.508808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.508835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.508973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.509001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.509152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.509177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.509322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.509348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.509527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.509557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.509693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.509719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 00:34:23.007 [2024-10-28 15:30:09.509828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.007 [2024-10-28 15:30:09.509854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.007 qpair failed and we were unable to recover it. 
00:34:23.007 [2024-10-28 15:30:09.510500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.007 [2024-10-28 15:30:09.510552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.007 qpair failed and we were unable to recover it.
[from 15:30:09.510 onward the same connect() failure (errno = 111) repeats for tqpair=0x1e9d570, interleaved until 15:30:09.514 with further failures for tqpair=0x7fea50000b90, all against addr=10.0.0.2, port=4420; the last duplicate entry is at 15:30:09.526814; duplicates omitted]
00:34:23.008 [2024-10-28 15:30:09.526922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.526962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.527065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.527091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.527212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.527240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.527401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.527426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.527562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.527602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.527710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.527739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.527840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.527866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.528017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.528057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.528205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.528237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.528358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.528384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 
00:34:23.008 [2024-10-28 15:30:09.528533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.528561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.528685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.528715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.528835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.528862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.528993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.529020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.529192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.529220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.008 qpair failed and we were unable to recover it. 00:34:23.008 [2024-10-28 15:30:09.529341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.008 [2024-10-28 15:30:09.529367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.529454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.529480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.529621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.529672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.529813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.529841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.529993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.530036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.530164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.530200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.530324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.530349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.530460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.530486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.530584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.530613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.530758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.530785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.530868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.530895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.531064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.531093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.531213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.531239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.531375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.531402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.531509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.531539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.531684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.531711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.531818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.531845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.531963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.531993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.532102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.532128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.532240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.532266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.532407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.532435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.532555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.532597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.532733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.532773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.532889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.532916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.533064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.533105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.533286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.533316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.533429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.533458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.533587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.533614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.533743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.533771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.533923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.533967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.534152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.534178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.534320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.534363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.534527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.534556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.534662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.534690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.534798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.534825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.534967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.534995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.535132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.535173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.535349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.535378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.535540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.535569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.535685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.535712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.535849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.535876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.535998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.536026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.536154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.536194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.536379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.536408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.536528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.536557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.536667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.536698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.536828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.536854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.537019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.537048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.537211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.537236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.537410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.537439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.537570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.537599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.537745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.537772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.537894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.537920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.538084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.538114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.538282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.538306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.538474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.538503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.538599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.538628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.538773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.538800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.538925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.538951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.539096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.539125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.539224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.539250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.539386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.539411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.539550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.539579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.539719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.539747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.539867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.539893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.540030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.540059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.540169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.540210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.540336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.540362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.540498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.540527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.540656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.540684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.540789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.540815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.540905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.540947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.541127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.541157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.541313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.541341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.541471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.541500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 
00:34:23.009 [2024-10-28 15:30:09.541622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.541648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.541767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.541793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.009 [2024-10-28 15:30:09.541881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.009 [2024-10-28 15:30:09.541907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.009 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.542103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.542128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.542275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.542304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.542460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.542489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.542613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.542639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.542770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.542797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.542953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.542982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.543120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.543144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 
00:34:23.010 [2024-10-28 15:30:09.543307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.543350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.543485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.543514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.543662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.543689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.543791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.543817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.543951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.543981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.544113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.544153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.544294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.544319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.544459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.544488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.544621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.544648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.544779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.544806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 
00:34:23.010 [2024-10-28 15:30:09.544977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.545006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.545126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.545152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.545292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.545318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.545435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.545464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.545575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.545601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.545751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.545779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.545947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.545976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.546114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.546154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.546331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.546360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.546493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.546521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 
00:34:23.010 [2024-10-28 15:30:09.546673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.546699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.546865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.546894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.547017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.547046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.547168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.547193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.547316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.547342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.547476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.547516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.547661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.547687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.547782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.547812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.547951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.547980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.548107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.548148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 
00:34:23.010 [2024-10-28 15:30:09.548282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.548308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.548449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.548478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.548604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.548648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.548797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.548824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.548905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.548947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.549108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.549149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.549296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.549326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.549461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.549490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.549641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.549683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 00:34:23.010 [2024-10-28 15:30:09.549852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.010 [2024-10-28 15:30:09.549878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.010 qpair failed and we were unable to recover it. 
00:34:23.013 [2024-10-28 15:30:09.583789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.583815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.583992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.584019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.584149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.584180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.584342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.584369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.584495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.584545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.584717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.584749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.584888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.584927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.585107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.585138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.585292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.585329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.585442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.585484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 
00:34:23.013 [2024-10-28 15:30:09.585677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.585734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.585906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.585948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.586117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.586158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.586311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.586349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.586493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.586523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.586656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.586698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.586865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.586902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.587039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.587065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.587201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.587227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.587376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.587402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 
00:34:23.013 [2024-10-28 15:30:09.587530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.587557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.587683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.587711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.587846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.587896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.588018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.588048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.588194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.588220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.588325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.588350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.588472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.588501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.588629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.588661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.588816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.588860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.589049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.589079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 
00:34:23.013 [2024-10-28 15:30:09.589199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.589225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.589376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.589403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.589572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.589601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.589710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.589735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.589871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.589897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.590037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.590067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.590217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.590241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.590438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.590472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.590641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.590693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.590824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.590851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 
00:34:23.013 [2024-10-28 15:30:09.591073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.591103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.591257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.591287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.591468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.591492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.591644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.591687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.591803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.591833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.591950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.591991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.592153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.592193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.592310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.592340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.592501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.592542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.592669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.592711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 
00:34:23.013 [2024-10-28 15:30:09.592864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.592894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.593036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.593075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.593278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.593308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.593472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.593501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.593616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.593662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.593802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.593844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.593967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.593998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.594105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.013 [2024-10-28 15:30:09.594131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.013 qpair failed and we were unable to recover it. 00:34:23.013 [2024-10-28 15:30:09.594302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.594328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.594489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.594518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 
00:34:23.014 [2024-10-28 15:30:09.594676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.594703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.594886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.594916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.595099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.595129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.595286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.595311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.595492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.595535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.595712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.595739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.595869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.595896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.596027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.596053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.596217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.596258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.596421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.596446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 
00:34:23.014 [2024-10-28 15:30:09.596589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.596631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.596776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.596806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.596937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.596978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.597170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.597200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.597294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.597325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.597470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.597495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.597662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.597689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.597841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.597876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.598020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.598071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.598262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.598292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 
00:34:23.014 [2024-10-28 15:30:09.598401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.598430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.598656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.598682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.598828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.598857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.599015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.599056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.599233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.599257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.599443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.599474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.599629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.599666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.599828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.599860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.599989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.600030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.600172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.600201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 
00:34:23.014 [2024-10-28 15:30:09.600325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.600351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.600521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.600565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.600675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.600719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.600821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.600861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.601004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.601030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.601189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.601219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.601398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.601435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.601583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.601624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.601749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.601780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.601921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.601962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 
00:34:23.014 [2024-10-28 15:30:09.602105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.602155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.602315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.602346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.602537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.602563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.602744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.602771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.602921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.602954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.603123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.603148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.603292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.603317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.603461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.603486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.603596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.603636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.603743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.603769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 
00:34:23.014 [2024-10-28 15:30:09.603884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.603926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.604115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.604140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.604253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.604280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.604432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.604461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.604604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.604629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.604771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.604814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.604920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.604950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.605064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.605111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.605249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.605273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.605443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.605472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 
00:34:23.014 [2024-10-28 15:30:09.605654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.605679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.605799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.605825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.605966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.605991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.606133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.606159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.606271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.606296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.606442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.606479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.606659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.606690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.606828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.606855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.014 qpair failed and we were unable to recover it. 00:34:23.014 [2024-10-28 15:30:09.607044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.014 [2024-10-28 15:30:09.607068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.607220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.607250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 
00:34:23.015 [2024-10-28 15:30:09.607352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.607382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.607543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.607572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.607711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.607752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.607860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.607886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.608019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.608045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.608158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.608182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.608318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.608343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.608493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.608522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.608633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.608664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 00:34:23.015 [2024-10-28 15:30:09.608772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.015 [2024-10-28 15:30:09.608798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.015 qpair failed and we were unable to recover it. 
00:34:23.015 [2024-10-28 15:30:09.608965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.015 [2024-10-28 15:30:09.608995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:23.015 qpair failed and we were unable to recover it.
00:34:23.015 [... the three-line connect()/qpair-failure pattern above repeats back to back (about 125 further occurrences between 15:30:09.609103 and 15:30:09.630617), all with errno = 111, tqpair=0x7fea50000b90, addr=10.0.0.2, port=4420 ...]
00:34:23.016 [... from 15:30:09.630808 onward the same pattern repeats for tqpair=0x7fea4c000b90, still addr=10.0.0.2, port=4420 ...]
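Errno 111 here is ECONNREFUSED: the host keeps retrying TCP connects to 10.0.0.2 port 4420 while nothing is listening on that address, because the test has just killed the target application (see the "Killed" line below). A minimal stand-alone C sketch of the same failure mode, assuming only the address and port shown in the log (this is not SPDK code, just an illustration):

    /* Illustrative sketch: a bare connect() against an address/port with no
     * listener fails with errno 111 (ECONNREFUSED), which is the condition
     * posix_sock_create() keeps reporting above while the target is down. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no nvmf_tgt listening this prints errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Built with cc and run while no listener is up on that address, this should print the same "connect() failed, errno = 111" that posix_sock_create reports above.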
00:34:23.017 [... the connect()/qpair-failure pattern continues for tqpair=0x7fea4c000b90 ...]
00:34:23.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3318912 Killed "${NVMF_APP[@]}" "$@"
00:34:23.017 [... the pattern continues for tqpair=0x7fea4c000b90 ...]
00:34:23.017 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:23.017 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:23.017 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:23.017 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:23.017 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.017 [... occurrences of the connect()/qpair-failure pattern for tqpair=0x7fea4c000b90 are interleaved with the trace lines above and continue below ...]
00:34:23.017 [2024-10-28 15:30:09.641428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.641482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.641593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.641660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.641762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.641791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.641917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.641961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.642073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.642101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.642228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.642254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.642407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.642448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.642569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.642601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.642725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.642750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.642849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.642876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 
00:34:23.017 [2024-10-28 15:30:09.642991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.643015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.643162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.643203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.643389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.643414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.643495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.643519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 [2024-10-28 15:30:09.643622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.017 [2024-10-28 15:30:09.643672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.017 qpair failed and we were unable to recover it. 00:34:23.017 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3319861 00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:23.018 [2024-10-28 15:30:09.643773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.643799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3319861 00:34:23.018 [2024-10-28 15:30:09.643881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.643920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.644088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.644116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3319861 ']' 00:34:23.018 [2024-10-28 15:30:09.644236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.644264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.018 [2024-10-28 15:30:09.644367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.644396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:23.018 [2024-10-28 15:30:09.644550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.644576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.018 [2024-10-28 15:30:09.644725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.018 [2024-10-28 15:30:09.644768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:23.018 [2024-10-28 15:30:09.644873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.644912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 15:30:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.018 [2024-10-28 15:30:09.645089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.645121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.645224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.645254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
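For orientation, the xtrace lines above record how the target side is brought back up for this test case: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0xF0, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. The following is only a minimal standalone sketch of that sequence: the binary path, namespace, flags and socket path are copied from the trace, while the polling loop is an approximation of what waitforlisten does, not a copy of it.

    #!/usr/bin/env bash
    # Sketch only: mirrors the nvmf_tgt start-up recorded in the trace above.
    # Paths, namespace and flags come from the log; the wait loop approximates waitforlisten.
    set -euo pipefail

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    # Start the target in the test namespace with the same -i/-e/-m arguments as the trace
    # (-m 0xF0 pins the app to cores 4-7).
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!   # pid of the "ip netns exec" wrapper, good enough for this sketch

    # Block until the app is up and listening on its UNIX-domain RPC socket.
    until [ -S "$RPC_SOCK" ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"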
00:34:23.018 [2024-10-28 15:30:09.645386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.645415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.645596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.645625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.645757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.645785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.645887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.645920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.646107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.646135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.646291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.646335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.646470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.646497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.646699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.646726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.646822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.646849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.647002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.647032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
00:34:23.018 [2024-10-28 15:30:09.647229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.647258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.647404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.647432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.647575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.647605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.647760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.647787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.647889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.647940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.648101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.648131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.648235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.648264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.648398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.648426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.648548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.648578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.648682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.648725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
00:34:23.018 [2024-10-28 15:30:09.648840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.648867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.648984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.649020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.649149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.649179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.649343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.649373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.649518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.649546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.649664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.649715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.649866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.649893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.650065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.650094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.650186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.650221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.650354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.650384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
00:34:23.018 [2024-10-28 15:30:09.650521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.650549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.650779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.650806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.650897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.650932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.651077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.651103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.651254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.651296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.651469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.651498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.651615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.651665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.651828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.651853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.651960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.652002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.652188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.652216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
00:34:23.018 [2024-10-28 15:30:09.652353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.652378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.652523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.652551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.652695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.652722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.652853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.652879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.652993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.653021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.653148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.653173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.653292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.653318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.653522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.653552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.653655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.653681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.653814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.653841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
00:34:23.018 [2024-10-28 15:30:09.653947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.653976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.654135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.654162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.654333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.654360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.654522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.654550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.654679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.654717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.654837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.654863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.654962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.654990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.655120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.655147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.655308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.655350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.655458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.655486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 
00:34:23.018 [2024-10-28 15:30:09.655647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.655679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.655820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.018 [2024-10-28 15:30:09.655845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.018 qpair failed and we were unable to recover it. 00:34:23.018 [2024-10-28 15:30:09.655991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.656019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.656162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.656188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.656367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.656395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.656533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.656561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.656697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.656725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.656853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.656879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.657014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.657043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.657158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.657202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 
00:34:23.019 [2024-10-28 15:30:09.657341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.657371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.657472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.657501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.657684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.657712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.657846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.657872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.657967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.657995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.658120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.658146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.658263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.658288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.658446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.658476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.658592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.658637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.658778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.658803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 
00:34:23.019 [2024-10-28 15:30:09.658957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.658987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.659144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.659169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.659337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.659367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.659501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.659541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.659658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.659683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.659805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.659831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.659966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.659996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.660129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.660154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.660354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.660384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.660517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.660546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 
00:34:23.019 [2024-10-28 15:30:09.660705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.660731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.660865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.660909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.661050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.661080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.661184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.661210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.661357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.661382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.661557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.661587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.661707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.661734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.661866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.661892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.662053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.662080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.662243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.662267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 
00:34:23.019 [2024-10-28 15:30:09.662390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.662431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.662578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.662607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.662784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.662809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.662967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.662992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.663147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.663176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.663328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.663369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.663508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.663551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.663698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.663726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.663860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.663887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.664029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.664065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 
00:34:23.019 [2024-10-28 15:30:09.664247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.664279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.664401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.664429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.664566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.664595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.664766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.664809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.664923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.664966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.665093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.665119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.665277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.665308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.665429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.665457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.665630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.665710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.665886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.665919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 
00:34:23.019 [2024-10-28 15:30:09.666055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.666089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.666232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.666273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.666431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.666459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.666624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.666656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.666815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.666849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.666997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.667028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.667178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.667206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.667341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.019 [2024-10-28 15:30:09.667384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.019 qpair failed and we were unable to recover it. 00:34:23.019 [2024-10-28 15:30:09.667539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.020 [2024-10-28 15:30:09.667569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.020 qpair failed and we were unable to recover it. 00:34:23.020 [2024-10-28 15:30:09.667706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.020 [2024-10-28 15:30:09.667735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.020 qpair failed and we were unable to recover it. 
00:34:23.022 [2024-10-28 15:30:09.700629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.700677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.700785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.700812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.700975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.701001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.701160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.701188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.701328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.701355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.701485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.701510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.701663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.701692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.701874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.701901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.702154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.702182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.702340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.702368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 
00:34:23.022 [2024-10-28 15:30:09.702544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.702572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.702707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.702735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.702874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.702899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.703028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.703053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.703152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.703179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.703292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.703320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.703426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.703453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.703613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.703639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.703753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.703781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 00:34:23.022 [2024-10-28 15:30:09.703923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.022 [2024-10-28 15:30:09.703948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.022 qpair failed and we were unable to recover it. 
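(Context, not part of the console output.) On Linux, errno 111 is ECONNREFUSED: the target address answered, but nothing was accepting connections on the port, so SPDK's posix_sock_create() sees connect() fail and nvme_tcp gives up on the qpair. The minimal C sketch below is independent of SPDK and only borrows the address 10.0.0.2 and port 4420 from the log; with the host reachable but no listener bound it prints the same errno, while an unreachable network would instead time out or return a different errno.

/* Hedged illustration, not SPDK source: reproduce errno 111 (ECONNREFUSED)
 * with a bare TCP connect() when nothing listens on the target port. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but no listener bound, errno is 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The repetition in the log, with only the microsecond timestamp changing, appears to be repeated connection attempts from the initiator hitting the same refusal each time.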
00:34:23.022 [... connect() errno = 111 failures on tqpair=0x7fea4c000b90 continue for timestamps 15:30:09.704077 through 15:30:09.704808 ...]
00:34:23.022 [2024-10-28 15:30:09.704816] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization...
00:34:23.022 [2024-10-28 15:30:09.704904] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:23.023 [... connect() errno = 111 failures on tqpair=0x7fea4c000b90 continue for timestamps 15:30:09.704944 through 15:30:09.705522 ...]
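(Context, not part of the console output.) The bracketed EAL parameter list above is the argv-style vector passed to DPDK's Environment Abstraction Layer while the nvmf target initializes. As a rough sketch only, assuming nothing beyond DPDK's public rte_eal_init()/rte_eal_cleanup() API and showing just a subset of the logged flags, this is how such a vector is consumed; the program name "nvmf" is kept as in the log.

/* Hedged sketch, not SPDK source: feed EAL-style arguments (a subset of the
 * flags from the log line above) to DPDK's initializer. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* program name as logged */
        "-c", "0xF0",                     /* coremask: cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    /* rte_eal_init() parses the vector and returns the number of arguments
     * it consumed, or a negative value with rte_errno set on failure. */
    int consumed = rte_eal_init(eal_argc, eal_argv);
    if (consumed < 0) {
        fprintf(stderr, "rte_eal_init: %s\n", rte_strerror(rte_errno));
        return 1;
    }

    printf("EAL up, consumed %d args\n", consumed);
    rte_eal_cleanup();
    return 0;
}

In the actual run, the remaining logged options (the extra --log-level entries and --match-allocations) would simply be appended to the same vector.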
00:34:23.023 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for timestamps 15:30:09.705669 through 15:30:09.731997 ...]
00:34:23.025 [2024-10-28 15:30:09.732110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.732134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.732319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.732348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.732489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.732515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.732645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.732678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.732802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.732830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.732982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.733023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.733132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.733157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.733337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.733365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.733486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.733513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.733686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.733729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 
00:34:23.025 [2024-10-28 15:30:09.733831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.733859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.734004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.734029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.734134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.734166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.734360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.734402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.734599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.734640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.734855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.734893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.735105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.735144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.735340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.735368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.735506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.735551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.735656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.735693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 
00:34:23.025 [2024-10-28 15:30:09.735822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.735847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.735960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.735987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.736118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.736145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.736279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.736304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.736449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.736475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.736599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.736625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.736747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.736773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.736875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.736902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.737051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.737079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.737241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.737267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 
00:34:23.025 [2024-10-28 15:30:09.737392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.737432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.737578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.737606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.737763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.737789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.737885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.737911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.738054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.738082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.738228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.738254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.738426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.738465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.738641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.738703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.738910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.738944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.739085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.739122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 
00:34:23.025 [2024-10-28 15:30:09.739323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.739365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.739557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.739595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.739767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.739816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.739962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.740002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.740222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.740258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.740403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.740455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.740602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.740632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.740803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.740830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.740986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.741015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.741125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.741154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 
00:34:23.025 [2024-10-28 15:30:09.741304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.741330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.741463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.741489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.741700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.741728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.741867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.741894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.741999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.742026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.742168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.742197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.742335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.742361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.742526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.742568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.742710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.742746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.742862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.742889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 
00:34:23.025 [2024-10-28 15:30:09.742995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.743021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.743190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.743220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.743390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.743416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.743546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.025 [2024-10-28 15:30:09.743570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.025 qpair failed and we were unable to recover it. 00:34:23.025 [2024-10-28 15:30:09.743721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.743752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.743864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.743890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.744029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.744065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.744214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.744243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.744371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.744397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.744499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.744527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 
00:34:23.026 [2024-10-28 15:30:09.744645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.744690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.744819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.744844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.744953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.744980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.745155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.745183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.745317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.745343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.745527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.745554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.745684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.745730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.745874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.745899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.746059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.746085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.746227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.746257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 
00:34:23.026 [2024-10-28 15:30:09.746383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.746409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.746571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.746626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.746784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.746811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.746958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.746983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.747162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.747204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.747322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.747352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.747495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.747522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.747685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.747712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.747811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.747836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.747960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.747987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 
00:34:23.026 [2024-10-28 15:30:09.748137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.748188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.748296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.748324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.748505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.748531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.748691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.748734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.748838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.748866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.748984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.749011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.749139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.749180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.749288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.749316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.749466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.749499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.749627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.749676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 
00:34:23.026 [2024-10-28 15:30:09.749814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.749843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.749983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.750019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.750109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.750134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.750278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.750307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.750420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.750444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.750569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.750595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.750706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.750735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.750875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.750901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.751037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.751078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.751233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.751263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 
00:34:23.026 [2024-10-28 15:30:09.751451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.751476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.751616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.751639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.751790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.751817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.751929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.751955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.752081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.752123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.752262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.752288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.752449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.752474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.752576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.752602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.752743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.752769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.752871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.752896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 
00:34:23.026 [2024-10-28 15:30:09.753034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.753059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.753223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.753249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.753402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.753429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.753566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.753595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.753720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.753745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.753851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.753878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.754043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.754068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.754228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.754255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.754421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.754451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 00:34:23.026 [2024-10-28 15:30:09.754562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.026 [2024-10-28 15:30:09.754590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.026 qpair failed and we were unable to recover it. 
00:34:23.026 [2024-10-28 15:30:09.754723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.026 [2024-10-28 15:30:09.754750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420
00:34:23.026 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 15:30:09.754846 through 15:30:09.779993 ...]
00:34:23.028 [2024-10-28 15:30:09.780121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.028 [2024-10-28 15:30:09.780163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420
00:34:23.028 qpair failed and we were unable to recover it.
00:34:23.029 [2024-10-28 15:30:09.781022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.029 [2024-10-28 15:30:09.781068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.029 qpair failed and we were unable to recover it.
[... the same failure sequence continues through 15:30:09.789906, switching between tqpair=0x7fea4c000b90 and tqpair=0x1e9d570, always against addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:23.030 [2024-10-28 15:30:09.790043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.790085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.790206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.790232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.790374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.790403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.790540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.790567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.790668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.790695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.790786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.790813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.790938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.790964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.791085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.791112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.791253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.791282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.791396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.791421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 
00:34:23.030 [2024-10-28 15:30:09.791551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.791578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.791700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.791732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.791832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.791858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.791947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.791974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.792101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.792131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.792287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.792313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.792427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.792454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.792596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.792626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.792763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.792790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.792940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.792982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 
00:34:23.030 [2024-10-28 15:30:09.793082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.793111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.793233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.793273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.793390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.793417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.793590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.793620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.793758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.793785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.793948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.793992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.794113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.794147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.794262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.794289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.794387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.794413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.794523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.794552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 
00:34:23.030 [2024-10-28 15:30:09.794664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.794691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.794816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.794843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.794981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.795010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.795180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.795205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.795303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.795329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.795468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.795497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.795657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.795690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.795778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.795805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.795946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.795975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.796122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.796148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 
00:34:23.030 [2024-10-28 15:30:09.796268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.796294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.796443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.796471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.796633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.796669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.796836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.796864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.796984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.797013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.797156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.797197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.797318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.797344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.797452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.797482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.797586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.797628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.797774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.797800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 
00:34:23.030 [2024-10-28 15:30:09.797939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.797969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.798102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.798143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.798281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.798323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.798460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.798489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.798503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:23.030 [2024-10-28 15:30:09.798665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.798693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.798833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.798859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.799008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.799038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.799179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.799204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.799343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.799385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
00:34:23.030 [2024-10-28 15:30:09.799508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.030 [2024-10-28 15:30:09.799538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.030 qpair failed and we were unable to recover it.
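Buried in the block above is the one non-error entry: app.c, spdk_app_start, logging "Total cores available: 4". That is the SPDK event framework starting an application on this 4-core runner while the connection retries continue around it; which test process is starting is not identifiable from these lines alone. For context only, the sketch below shows the usual shape of that entry point, based on SPDK's public event API; the application name and the my_start callback are illustrative and not taken from this test.

/* Hedged sketch of the SPDK app framework entry point that emits the
 * "Total cores available: N" notice during startup. Based on SPDK's public
 * event API; "log_context_app" and my_start are illustrative names only. */
#include "spdk/stdinc.h"
#include "spdk/event.h"

static void
my_start(void *ctx)
{
    /* Real applications schedule their work on the reactors here. */
    spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts;
    int rc;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "log_context_app";

    /* spdk_app_start() initializes the environment and reactor threads; this
     * is the phase in which app.c prints the "Total cores available" notice,
     * then it blocks until spdk_app_stop() is called. */
    rc = spdk_app_start(&opts, my_start, NULL);

    spdk_app_fini();
    return rc;
}

Because spdk_app_start() blocks for the lifetime of the application, the notice appears exactly once at startup, interleaved here with the initiator's ongoing reconnect attempts.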
00:34:23.030 [2024-10-28 15:30:09.799673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.030 [2024-10-28 15:30:09.799702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.030 qpair failed and we were unable to recover it. 00:34:23.030 [2024-10-28 15:30:09.799823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.799850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.799982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.800011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.800138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.800180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.800323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.800365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.800486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.800515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.800623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.800655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.800764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.800791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.800916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.800960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.801089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.801130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 
00:34:23.031 [2024-10-28 15:30:09.801294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.801338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.801468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.801498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.801612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.801638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.801779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.801806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.801927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.801972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.802075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.802102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.802246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.802272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.802408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.802437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.802571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.802597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.802744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.802772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 
00:34:23.031 [2024-10-28 15:30:09.802878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.802909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.803040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.803082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.803213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.803240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.803356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.803386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.803513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.803539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.803668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.803696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.803815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.803845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.803955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.803981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.804117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.804144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.804247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.804277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 
00:34:23.031 [2024-10-28 15:30:09.804386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.804413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.804498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.804525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.804690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.804720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.804826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.804854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.805006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.805032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.805182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.805212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.805330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.805357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.805524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.805567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.805725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.805755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.805894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.805920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 
00:34:23.031 [2024-10-28 15:30:09.806066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.806108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.806238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.806267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.806400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.806426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.806538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.806564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.806738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.806766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.806916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.806956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.807096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.807125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.807223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.807256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.807409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.807435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.807624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.807660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 
00:34:23.031 [2024-10-28 15:30:09.807797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.807826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.807958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.807999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.808209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.808238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.808386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.808427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.808611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.808637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.808758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.808801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.808965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.808995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.809130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.809171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.809305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.809332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.809502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.809531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 
00:34:23.031 [2024-10-28 15:30:09.809668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.809694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.809815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.809842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.810015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.810045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.810180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.810206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.810359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.810385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.810522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.810551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.810664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.810694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.810844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.810871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.811041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.811070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.031 [2024-10-28 15:30:09.811238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.811263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 
00:34:23.031 [2024-10-28 15:30:09.811419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.031 [2024-10-28 15:30:09.811448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.031 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.811600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.811629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.811839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.811866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.812029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.812058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.812181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.812211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.812380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.812421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.812573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.812601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.812763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.812793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.812925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.812966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.813061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.813087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 
00:34:23.032 [2024-10-28 15:30:09.813210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.813238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.813365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.813391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.813540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.813566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.813718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.813748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.813868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.813894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.814044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.814071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.814210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.814239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.814362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.814387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.814503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.814534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.814691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.814721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 
00:34:23.032 [2024-10-28 15:30:09.814852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.814878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.814991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.815017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.815133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.815162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.815299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.815325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.815412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.815438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.815578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.815607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.815727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.815754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.815852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.815879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.816034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.816063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.816189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.816231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 
00:34:23.032 [2024-10-28 15:30:09.816369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.816411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.816566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.816595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.816762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.816789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.816872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.816898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.817034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.817063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.817198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.817224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.817364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.817405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.817533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.817562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.817681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.817708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.817819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.817846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 
00:34:23.032 [2024-10-28 15:30:09.817983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.818012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.818153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.818194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.818366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.818395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.818523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.818552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.818709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.818737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.818883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.818914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.819096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.819125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.819260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.819300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.819441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.819491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.819649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.819720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 
00:34:23.032 [2024-10-28 15:30:09.819804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.819830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.819956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.819983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.820126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.820155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.820275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.820301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.820467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.820508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.820638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.820673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.820813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.820840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.820964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.820990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.821099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.821128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.821257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.821283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 
00:34:23.032 [2024-10-28 15:30:09.821406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.821432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.821557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.821586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.821738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.821764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.821888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.821914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.822084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.822113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.822228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.822269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.822416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.822441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.822622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.822658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.822838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.822865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.823029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.823058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 
00:34:23.032 [2024-10-28 15:30:09.823212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.823241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.823360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.823386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.823524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.823555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.823692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.823721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.823834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.823860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.824009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.824035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.824179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.824208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.824348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.032 [2024-10-28 15:30:09.824374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.032 qpair failed and we were unable to recover it. 00:34:23.032 [2024-10-28 15:30:09.824492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.824518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.824664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.824693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.824812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.824839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.824988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.825014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.825157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.825185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.825322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.825349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.825518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.825558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.825707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.825737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.825879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.825906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.826041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.826068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.826207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.826236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.826360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.826386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.826556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.826598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.826734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.826762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.826843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.826870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.827031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.827057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.827173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.827202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.827354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.827380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.827549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.827591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.827718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.827747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.827849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.827876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.828000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.828026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.828205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.828234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.828364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.828404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.828511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.828537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.828730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.828760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.828885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.828911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.829052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.829078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.829223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.829252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.829356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.829382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.829541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.829567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.829665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.829695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.829857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.829883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.830032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.830074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.830174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.830203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.830342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.830368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.830490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.830516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.830648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.830691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.830865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.830891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.831016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.831057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.831194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.831223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.831329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.831355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.831508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.831534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.831715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.831742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.831833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.831859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.832014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.832040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.832187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.832216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.832339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.832365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.832537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.832581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.832725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.832756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.832881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.832907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.833071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.833112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.833248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.833277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.833411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.833437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.833586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.833612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.833769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.833798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.833956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.833982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.834115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.834157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.834312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.834341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.834478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.834520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.834661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.834710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.834868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.834897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.835047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.835077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.835214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.835258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.835377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.835405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.835554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.835580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.835766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.835796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.835979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.836008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.836162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.836188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.836323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.836366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.836502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.836531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 00:34:23.033 [2024-10-28 15:30:09.836667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.836694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.033 qpair failed and we were unable to recover it. 
00:34:23.033 [2024-10-28 15:30:09.836827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.033 [2024-10-28 15:30:09.836853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.836982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.837011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.837142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.837169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.837332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.837376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.837506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.837535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.837673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.837699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.837820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.837846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.837961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.837990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.838127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.838153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.838278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.838304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 
00:34:23.034 [2024-10-28 15:30:09.838439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.838468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.838619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.838645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.838827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.838857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.838987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.839016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.839136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.839162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.839287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.839313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.839528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.839559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.839723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.839755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.839876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.839904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.840052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.840078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 
00:34:23.034 [2024-10-28 15:30:09.840223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.840249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.840379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.840423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.840553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.840583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.840742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.840772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.840872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.840898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.841064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.841093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.841222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.841263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.841399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.841439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.841562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.841591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.841752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.841779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 
00:34:23.034 [2024-10-28 15:30:09.841886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.841912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.842032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.842061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.842191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.842217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.842374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.842405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.842546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.842576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.842688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.842719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.842870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.842896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.843044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.843086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.843197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.843223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.843321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.843348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 
00:34:23.034 [2024-10-28 15:30:09.843475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.843504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.843647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.843692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.843836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.843862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.843978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.844007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.844170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.844216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.844323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.844366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.844517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.844546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.844699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.844727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.844831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.844858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.034 [2024-10-28 15:30:09.844976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.845003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 
00:34:23.034 [2024-10-28 15:30:09.845093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.034 [2024-10-28 15:30:09.845120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.034 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.845218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.845244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.845375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.845404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.845534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.845564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.845725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.845766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.845926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.845961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.846118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.846145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.846271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.846297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.846446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.846489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.846695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.846746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 
00:34:23.327 [2024-10-28 15:30:09.846929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.846968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.847124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.847162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.847360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.847387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.847534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.847588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.847848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.847876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.848069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.848114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.848315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.848367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.848524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.848568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.848785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.848814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.849065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.849109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 
00:34:23.327 [2024-10-28 15:30:09.849224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.849269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.849467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.849518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.849713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.849744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.849892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.849921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.850119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.850163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.850338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.850382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.850488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.850515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.850715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.850745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.850974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.851019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.851178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.851223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 
00:34:23.327 [2024-10-28 15:30:09.851346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.851385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.851600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.851627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.851875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.851920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.852111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.852154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.852285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.852329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.852485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.852512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.852629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.852675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.852818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.852862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.327 [2024-10-28 15:30:09.852995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.327 [2024-10-28 15:30:09.853039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.327 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.853208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.853253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 
00:34:23.328 [2024-10-28 15:30:09.853360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.853387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.853573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.853602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.853850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.853878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.853998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.854025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.854276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.854304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.854431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.854458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.854614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.854641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.854835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.854881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.855127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.855173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.855286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.855324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 
00:34:23.328 [2024-10-28 15:30:09.855466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.855497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.855656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.855684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.855847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.855875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.856056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.856086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.856200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.856231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.856359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.856390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.856494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.856524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.856738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.856768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.856993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.857020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.857159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.857203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 
00:34:23.328 [2024-10-28 15:30:09.857359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.857404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.857619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.857657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.857757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.857785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.857962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.858007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.858154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.858198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.858404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.858447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.858601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.858628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.858784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.858812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.858966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.859010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.859210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.859256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 
00:34:23.328 [2024-10-28 15:30:09.859490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.859544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.859723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.859754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.859873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.859903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.860022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.860049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.860280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.860306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.860458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.860485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.860646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.860680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.860848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.860893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.328 [2024-10-28 15:30:09.861015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.328 [2024-10-28 15:30:09.861060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.328 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.861213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.861265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 
00:34:23.329 [2024-10-28 15:30:09.861417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.861443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.861593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.861620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.861798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.861826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.861947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.861991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.862170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.862209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.862445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.862473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.862663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.862690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.862827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.862854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.862985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.863040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.863195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.863238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 
00:34:23.329 [2024-10-28 15:30:09.863430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.863474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.863606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.863634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.863778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.863823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.864061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.864106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.864348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.864392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.864558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.864592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.864837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.864882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.865111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.865156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.865399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.865444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.865658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.865687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 
00:34:23.329 [2024-10-28 15:30:09.865882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.865909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.866084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.866133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.866282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.866327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.866463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.866490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.866711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.866739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.866914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.866971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.867220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.867265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.867525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.867571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.867740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.867768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.867916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.867969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 
00:34:23.329 [2024-10-28 15:30:09.868183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.868227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.868397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.868442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.868537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.868565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.868731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.868776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.868877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.868904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.869034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.869062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.869189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.869216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.869399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.869427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.329 [2024-10-28 15:30:09.869530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.329 [2024-10-28 15:30:09.869558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.329 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.869728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.869762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 
00:34:23.330 [2024-10-28 15:30:09.869861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.869890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.869995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.870023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.870149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.870176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.870295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.870344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.870477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.870531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.870708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.870739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.870918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.870973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.871172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.871217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.871380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.871432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.871544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.871577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 
00:34:23.330 [2024-10-28 15:30:09.871713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.871742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.871876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.871902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.872094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.872090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.330 [2024-10-28 15:30:09.872125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.330 [2024-10-28 15:30:09.872125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.872144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.330 [2024-10-28 15:30:09.872161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.330 [2024-10-28 15:30:09.872174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.330 [2024-10-28 15:30:09.872239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.872266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.872375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.872402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.872591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.872619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.872768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.872796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.872961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.872990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 
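The app_setup_trace notices above spell out how to pull tracepoint data for this run; a hedged sketch of that workflow, using only the command line and shared-memory path the log itself prints (the destination file names are arbitrary choices, not part of the test):
# Snapshot the running nvmf app's trace events, as the notice suggests;
# output is redirected to a file for convenience.
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
# Or keep the raw shared-memory trace buffer for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0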
00:34:23.330 [2024-10-28 15:30:09.873121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.873150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.873321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.873350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.873453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.873483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.873614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.873647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.873806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.873840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.873988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.874016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.874150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.874098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:23.330 [2024-10-28 15:30:09.874180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.874156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:23.330 [2024-10-28 15:30:09.874187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:23.330 [2024-10-28 15:30:09.874191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:23.330 [2024-10-28 15:30:09.874334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.874364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.874539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.874567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 
00:34:23.330 [2024-10-28 15:30:09.874722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.874748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.874852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.874878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.875075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.875103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.875229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.875266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.875438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.875468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.875607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.875636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.875837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.875863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.875980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.876007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.330 [2024-10-28 15:30:09.876140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.330 [2024-10-28 15:30:09.876182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.330 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.876323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.876353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 
00:34:23.331 [2024-10-28 15:30:09.876534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.876564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.876717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.876744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.876913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.876958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.877088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.877114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.877215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.877241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.877357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.877385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.877552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.877582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.877752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.877780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.877954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.877999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.878160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.878189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 
00:34:23.331 [2024-10-28 15:30:09.878276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.878303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.878446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.878476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.878585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.878612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.878752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.878780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.878870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.878897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.879040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.879066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.879159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.879185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.879349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.879379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.879500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.879543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.879671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.879716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 
00:34:23.331 [2024-10-28 15:30:09.879824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.879851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.879945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.879971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.880128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.880173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.880394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.880424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.880534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.880560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.880752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.880780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.880905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.880947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.881046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.881072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.881189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.881216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.881338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.881368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 
00:34:23.331 [2024-10-28 15:30:09.881523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.881552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.881727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.881770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.881877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.881918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.331 qpair failed and we were unable to recover it. 00:34:23.331 [2024-10-28 15:30:09.882060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.331 [2024-10-28 15:30:09.882087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.882218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.882246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.882386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.882423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.882596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.882626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.882811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.882840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.882991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.883020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.883157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.883184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 
00:34:23.332 [2024-10-28 15:30:09.883331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.883374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.883502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.883531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.883639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.883677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.883788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.883814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.883956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.883986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.884115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.884141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.884299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.884342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.884466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.884495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.884627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.884678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.884802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.884829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 
00:34:23.332 [2024-10-28 15:30:09.884990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.885019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.885205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.885231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.885410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.885441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.885636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.885680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.885809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.885835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.885956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.885983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.886144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.886173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.886351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.886380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.886488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.886517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.886657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.886701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 
00:34:23.332 [2024-10-28 15:30:09.886820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.886847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.886977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.887020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.887211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.887246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.887476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.887507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.887680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.887724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.887852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.887879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.888012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.888039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.888207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.888245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.888398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.888427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.888588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.888617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 
00:34:23.332 [2024-10-28 15:30:09.888770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.888797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.888950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.888980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.889115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.889141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.332 [2024-10-28 15:30:09.889272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.332 [2024-10-28 15:30:09.889299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.332 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.889452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.889482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.889633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.889666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.889764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.889790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.889886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.889923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.890052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.890079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.890204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.890230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 
00:34:23.333 [2024-10-28 15:30:09.890412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.890453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.890581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.890607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.890732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.890759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.890883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.890924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.891055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.891082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.891213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.891240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.891392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.891421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.891547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.891573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.891704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.891732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.891843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.891883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 
00:34:23.333 [2024-10-28 15:30:09.892035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.892062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.892225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.892274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.892412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.892442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.892623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.892671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.892799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.892828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.892933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.892962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.893192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.893219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.893353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.893384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.893548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.893578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.893708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.893735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 
00:34:23.333 [2024-10-28 15:30:09.893863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.893890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.894084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.894114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.894260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.894286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.894479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.894508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.894675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.894718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.894867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.894893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.895062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.895091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.895210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.333 [2024-10-28 15:30:09.895240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.333 qpair failed and we were unable to recover it. 00:34:23.333 [2024-10-28 15:30:09.895435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.895462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.895625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.895663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 
00:34:23.334 [2024-10-28 15:30:09.895778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.895808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.895917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.895944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.896108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.896157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.896324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.896354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.896548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.896574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.896666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.896693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.896809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.896838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.896957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.896993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.897116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.897142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.897300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.897331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 
00:34:23.334 [2024-10-28 15:30:09.897486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.897512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.897639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.897677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.897848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.897877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.898006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.898032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.898179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.898207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.898359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.898387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.898517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.898543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.898669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.898702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.898829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.898858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.899012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.899042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 
00:34:23.334 [2024-10-28 15:30:09.899148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.899175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.899405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.899435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.899606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.899635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.899788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.899816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.899923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.899967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.900103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.900131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.900362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.900391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.900575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.900604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.900747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.900774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.900890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.900917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 
00:34:23.334 [2024-10-28 15:30:09.901048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.901078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.901200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.901227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.901372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.901399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.901536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.901572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.901749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.901777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.901929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.901964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.902102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.902136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.902293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.334 [2024-10-28 15:30:09.902329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.334 qpair failed and we were unable to recover it. 00:34:23.334 [2024-10-28 15:30:09.902470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.902496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.902625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.902659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-10-28 15:30:09.902792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.902819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.902944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.902971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.903110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.903140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.903248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.903276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.903416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.903445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.903581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.903612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.903772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.903803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.903935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.903989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.904137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.904172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.904358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.904388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-10-28 15:30:09.904549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.904585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.904688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.904719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.904847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.904874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.905027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.905099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.905278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.905311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.905488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.905516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.905617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.905644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.905787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.905817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.905923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.905949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.906090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.906118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-10-28 15:30:09.906233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.906263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.906395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.906422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.906547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.906574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.906696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.906726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.906851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.906879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.907011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.907037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.907159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.907187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.907327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.907355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.907480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.907507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.907617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.907647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 
00:34:23.335 [2024-10-28 15:30:09.907805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.907832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.907934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.907961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.908111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.335 [2024-10-28 15:30:09.908140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.335 qpair failed and we were unable to recover it. 00:34:23.335 [2024-10-28 15:30:09.908315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.908341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.908471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.908515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.908621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.908659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.908771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.908797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.908920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.908946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.909061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.909089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.909208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.909236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 
00:34:23.336 [2024-10-28 15:30:09.909360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.909387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.909516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.909546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.909662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.909689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.909789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.909816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.909987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.910018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.910145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.910172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.910269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.910296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.910407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.910441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.910559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.910586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.910701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.910729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 
00:34:23.336 [2024-10-28 15:30:09.910842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.910871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.910999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.911026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.911153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.911181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.911294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.911324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.911497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.911528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.911670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.911717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.911845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.911871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.911967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.911995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.912092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.912118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.912400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.912430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 
00:34:23.336 [2024-10-28 15:30:09.912564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.912589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.912713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.912740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.912863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.912892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.913050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.913076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.913168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.913194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.913351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.913379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.913498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.913526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.913662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.913689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.913807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.913836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.336 [2024-10-28 15:30:09.913964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.913991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 
00:34:23.336 [2024-10-28 15:30:09.914091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.336 [2024-10-28 15:30:09.914117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.336 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.914225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.914256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.914397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.914423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.914550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.914577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.914723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.914750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.914849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.914875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.914969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.914996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.915129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.915158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.915266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.915294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.915414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.915440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 
00:34:23.337 [2024-10-28 15:30:09.915543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.915573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.915688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.915716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.915838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.915865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.915973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.916002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.916114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.916140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.916259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.916285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.916387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.916415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.916539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.916572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.916682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.916709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.916821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.916851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 
00:34:23.337 [2024-10-28 15:30:09.917017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.917043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.917165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.917209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.917372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.917401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.917517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.917544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.917685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.917731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.917858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.917890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.918004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.918031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.918181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.918208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.918357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.918386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.918512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.918539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 
00:34:23.337 [2024-10-28 15:30:09.918641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.918675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.918808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.918838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.918941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.918967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.919120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.919147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.919314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.919343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.919452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.919478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.919663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.919711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.919812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.919839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.919935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.919961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.337 [2024-10-28 15:30:09.920090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.920117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 
00:34:23.337 [2024-10-28 15:30:09.920292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.337 [2024-10-28 15:30:09.920321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.337 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.920428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.920455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.920558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.920587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.920688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.920721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.920843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.920877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.920967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.920994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.921127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.921156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.921259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.921285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.921385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.921412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.921533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.921561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 
00:34:23.338 [2024-10-28 15:30:09.921704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.921732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.921840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.921866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.922011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.922040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.922174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.922200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.922325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.922352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.922466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.922496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.922617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.922644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.922755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.922781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.922896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.922925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.923061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.923088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 
00:34:23.338 [2024-10-28 15:30:09.923205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.923231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.923375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.923405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.923566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.923596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.923723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.923750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.923877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.923904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.924022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.924048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.924148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.924174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.924322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.924352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.924457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.924483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.924626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.924664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 
00:34:23.338 [2024-10-28 15:30:09.924798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.924825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.924960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.924992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.925116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.925160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.925268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.925297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.925431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.925459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.925581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.925607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.925724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.925768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.925864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.925891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.926040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.926066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.926237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.926267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 
00:34:23.338 [2024-10-28 15:30:09.926389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.338 [2024-10-28 15:30:09.926415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.338 qpair failed and we were unable to recover it. 00:34:23.338 [2024-10-28 15:30:09.926568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.926594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.926738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.926765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.926863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.926890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.927015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.927041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.927219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.927249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.927382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.927409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.927525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.927552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.927694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.927725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.927838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.927865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 
00:34:23.339 [2024-10-28 15:30:09.927987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.928013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.928135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.928166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.928274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.928301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.928444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.928490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.928618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.928660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.928777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.928804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.928906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.928934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.929059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.929088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.929219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.929253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.929352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.929378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 
00:34:23.339 [2024-10-28 15:30:09.929520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.929548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.929689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.929718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.929809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.929835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.930000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.930030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.930158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.930184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.930337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.930380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.930536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.930566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.930678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.930705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.930805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.930832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.930994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.931023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 
00:34:23.339 [2024-10-28 15:30:09.931155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.931182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.931305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.931333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.931481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.931511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.931645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.931678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.931773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.931800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.931893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.931920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.932103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.932129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.932251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.932295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.932455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.932484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.932641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.932698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 
00:34:23.339 [2024-10-28 15:30:09.932799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.932845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.339 [2024-10-28 15:30:09.933003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.339 [2024-10-28 15:30:09.933032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.339 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.933139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.933165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.933291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.933319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.933424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.933453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.933617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.933657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.933762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.933806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.933932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.933961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.934115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.934143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.934236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.934264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 
00:34:23.340 [2024-10-28 15:30:09.934416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.934445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.934556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.934582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.934675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.934702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.934814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.934842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.934966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.934993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.935084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.935110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.935243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.935272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.935399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.935426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.935554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.935582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.935724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.935752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 
00:34:23.340 [2024-10-28 15:30:09.935856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.935882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.936003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.936030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.936166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.936196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.936325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.936352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.936503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.936547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.936645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.936689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.936801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.936827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.936975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.937001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.937115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.937144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.937309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.937335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 
00:34:23.340 [2024-10-28 15:30:09.937433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.937461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.937627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.937664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.937785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.937816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.937949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.937976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.938125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.938154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.938318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.938345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.938472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.340 [2024-10-28 15:30:09.938516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.340 qpair failed and we were unable to recover it. 00:34:23.340 [2024-10-28 15:30:09.938616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.938645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.938792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.938819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.938946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.938973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 
00:34:23.341 [2024-10-28 15:30:09.939087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.939116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.939250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.939277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.939406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.939435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.939540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.939569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.939705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.939732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.939835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.939863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.940006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.940036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.940148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.940175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.940298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.940326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.940440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.940470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 
00:34:23.341 [2024-10-28 15:30:09.940576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.940602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.940776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.940805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.940952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.940982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.941118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.941144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.941268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.941296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.941470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.941500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.941602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.941630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.941772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.941800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.941947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.941978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.942138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.942165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 
00:34:23.341 [2024-10-28 15:30:09.942257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.942285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.942456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.942487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.942615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.942670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.942811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.942838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.942964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.942993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.943106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.943132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.943283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.943312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.943421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.943451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.943555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.943582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.943716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.943744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 
00:34:23.341 [2024-10-28 15:30:09.943869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.943896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.944026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.944054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.944145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.944173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.944329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.944359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.944464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.944491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.944619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.944647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.944787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.944817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.341 qpair failed and we were unable to recover it. 00:34:23.341 [2024-10-28 15:30:09.944952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.341 [2024-10-28 15:30:09.944979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.945129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.945171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.945304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.945332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 
00:34:23.342 [2024-10-28 15:30:09.945470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.945496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.945590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.945619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.945761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.945788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.945874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.945901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.946025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.946052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.946158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.946188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.946302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.946329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.946453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.946481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.946621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.946667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.946812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.946839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 
00:34:23.342 [2024-10-28 15:30:09.946967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.946993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.947106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.947135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.947266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.947293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.947446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.947489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.947656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.947687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.947795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.947822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.947980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.948007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.948166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.948196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.948326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.948353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.948452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.948485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 
00:34:23.342 [2024-10-28 15:30:09.948620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.948657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.948804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.948830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.948950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.948976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.949148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.949177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.949290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.949316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.949440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.949468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.949608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.949638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.949811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.949839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.949956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.949999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.950129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.950159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 
00:34:23.342 [2024-10-28 15:30:09.950284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.950311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.950436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.950463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.950614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.950643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.950801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.950828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.950949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.950992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.342 [2024-10-28 15:30:09.951124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.342 [2024-10-28 15:30:09.951153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.342 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.951314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.951340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.951459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.951503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.951633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.951669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.951836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.951864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 
00:34:23.343 [2024-10-28 15:30:09.951985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.952027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.952123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.952153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.952279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.952306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.952431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.952458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.952570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.952599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.952732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.952761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.952907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.952955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.953081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.953110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.953270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.953297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.953398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.953426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 
00:34:23.343 [2024-10-28 15:30:09.953571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.953601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.953749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.953777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.953899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.953927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.954071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.954100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.954242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.954269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.954395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.954423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.954565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.954594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.954705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.954732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.954863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.954890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.955044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.955074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 
00:34:23.343 [2024-10-28 15:30:09.955213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.955239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.955363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.955391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.955571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.955601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.955712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.955740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.955895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.955922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.956061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.956091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.956246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.956273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.956448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.956479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.956613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.956642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 00:34:23.343 [2024-10-28 15:30:09.956778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.343 [2024-10-28 15:30:09.956806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.343 qpair failed and we were unable to recover it. 
00:34:23.343 [2024-10-28 15:30:09.956899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.343 [2024-10-28 15:30:09.956926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.343 qpair failed and we were unable to recover it.
00:34:23.343 [... the same three-line failure repeats back to back from 15:30:09.956899 through 15:30:09.970139, alternating between tqpair=0x1e9d570 and tqpair=0x7fea4c000b90; every attempt is a connect() to addr=10.0.0.2, port=4420 that returns errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:34:23.345 [... the failure pattern continues unchanged from 15:30:09.970303 through 15:30:09.994698; a third handle, tqpair=0x7fea50000b90, shows up alongside tqpair=0x1e9d570 and tqpair=0x7fea4c000b90, and every connect() to addr=10.0.0.2, port=4420 still fails with errno = 111 followed by "qpair failed and we were unable to recover it." ...]
00:34:23.349 [2024-10-28 15:30:09.994795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.994823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.994975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.995020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.995206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.995233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.995374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.995410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.995576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.995612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.995835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.995864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.996037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.996078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.996239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.996288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.996456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.996503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.996673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.996730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 
00:34:23.349 [2024-10-28 15:30:09.996950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.996996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.997174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.997206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.997353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.997384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.997479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.997510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.997694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.997734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.997898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.997926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.998054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.998081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.998203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.998233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.998373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.998399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.998502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.998529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 
00:34:23.349 [2024-10-28 15:30:09.998677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.998724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.998847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.998874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.999007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.999050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.999151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.999181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.999346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.999373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.999463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.999490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.999658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.349 [2024-10-28 15:30:09.999718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.349 qpair failed and we were unable to recover it. 00:34:23.349 [2024-10-28 15:30:09.999906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:09.999934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.000063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.000094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.000254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.000284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 
00:34:23.350 [2024-10-28 15:30:10.000412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.000438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.000583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.000610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.000758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.000787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.000945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.000972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.001115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.001145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.001287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.001318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.001428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.001455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.001552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.001579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.001688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.001734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.001836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.001863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 
00:34:23.350 [2024-10-28 15:30:10.001961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.001987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.002147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.002189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.002374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.002401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.002582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.002624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.002818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.002846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.002997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.003035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.003132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.003175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.003349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.003390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.003523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.003565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.003734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.003763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 
00:34:23.350 [2024-10-28 15:30:10.003889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.003916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.004168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.004196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.004322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.004353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.004510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.004549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.004711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.004739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.004891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.004927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.005066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.005096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.005230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.005257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.005379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.005406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.005545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.005574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 
00:34:23.350 [2024-10-28 15:30:10.005710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.005738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.005861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.005888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.006018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.006059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.006279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.006305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.006422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.006453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.006664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.006721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.350 [2024-10-28 15:30:10.006840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.350 [2024-10-28 15:30:10.006866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.350 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.006977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.007004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.007131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.007160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.007318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.007346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 
00:34:23.351 [2024-10-28 15:30:10.007561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.007602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.007769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.007796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.007921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.007948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.008137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.008168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.008295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.008325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.008431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.008458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.008586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.008613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.008750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.008778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.008903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.008930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.009110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.009140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 
00:34:23.351 [2024-10-28 15:30:10.009278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.009308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.009442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.009471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.009615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.009642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.009749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.009776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.009919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.009947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.010084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.010129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.010278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.010308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.010511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.010553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.010705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.010738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.010871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.010914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 
00:34:23.351 [2024-10-28 15:30:10.011037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.011073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.011231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.011271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.011409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.011444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.011572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.011607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.011741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.011781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.011924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.011965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.012118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.012164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.012314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.012355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.012504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.012543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.012688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.012727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 
00:34:23.351 [2024-10-28 15:30:10.012866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.012895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.013001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.013029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.013145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.013173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.013265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.013292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.013420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.013447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.013596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.013627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.351 [2024-10-28 15:30:10.013760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.351 [2024-10-28 15:30:10.013788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.351 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.013935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.013962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.014094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.014124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.014272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.014298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 
00:34:23.352 [2024-10-28 15:30:10.014443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.014487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.014615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.014645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.014786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.014814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.014964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.015006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.015146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.015183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.015356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.015391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.015503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.015531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.015716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.015745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.015870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.015909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.016010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.016037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 
00:34:23.352 [2024-10-28 15:30:10.016170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.016200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.016358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.016400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.016545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.016576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.016737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.016765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.016872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.016911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.017067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.017096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.017252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.017279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.017414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.017458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.017598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.017633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 00:34:23.352 [2024-10-28 15:30:10.017758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.352 [2024-10-28 15:30:10.017785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.352 qpair failed and we were unable to recover it. 
00:34:23.352 [2024-10-28 15:30:10.017895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.352 [2024-10-28 15:30:10.017922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:23.352 qpair failed and we were unable to recover it.
00:34:23.354 [2024-10-28 15:30:10.028731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.354 [2024-10-28 15:30:10.028777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.354 qpair failed and we were unable to recover it.
[log condensed: the identical three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fea50000b90 and, briefly, tqpair=0x1e9d570 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 15:30:10.017895 through 15:30:10.053221, console timestamps 00:34:23.352 to 00:34:23.358.]
00:34:23.358 [2024-10-28 15:30:10.053361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.053405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.053572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.053602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.053762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.053790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.053891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.053918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.054070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.054115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.054235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.054262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.054407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.054435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.054579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.054609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.054728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.054755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.054888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.054915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 
00:34:23.358 [2024-10-28 15:30:10.055120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.055151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.055280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.055313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.055437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.055476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.055628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.055683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.055814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.055842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.055979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.056029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.056160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.056192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.056358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.056392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.056565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.056602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.056726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.056757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 
00:34:23.358 [2024-10-28 15:30:10.056906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.056934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.057098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.057142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.057242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.057272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.057428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.057455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.057569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.057596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.057769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.057809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.057947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.057974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.058132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.058176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.058313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.058344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.058489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.058516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 
00:34:23.358 [2024-10-28 15:30:10.058639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.058704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.358 qpair failed and we were unable to recover it. 00:34:23.358 [2024-10-28 15:30:10.058834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.358 [2024-10-28 15:30:10.058878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.059051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.059078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.059227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.059272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.059444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.059484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.059589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.059615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.059758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.059785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.059937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.059968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.060107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.060134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.060255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.060282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 
00:34:23.359 [2024-10-28 15:30:10.060390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.060424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.060561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.060607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.060773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.060804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.060937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.060973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.061097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.061141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.061289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.061330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.061464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.061509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.061634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.061679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.061807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.061849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.061981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.062023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 
00:34:23.359 [2024-10-28 15:30:10.062164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.062202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.062327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.062355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.062452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.062480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.062623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.062658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.062770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.062797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.062893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.062925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.063023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.063050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.063192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.063244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.063415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.063469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.063623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.063667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 
00:34:23.359 [2024-10-28 15:30:10.063816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.063846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.064008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.064052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.064234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.064271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.064421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.064459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.064679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.064728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.064831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.064860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.064984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.065012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.065141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.065185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.065343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.065373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.359 [2024-10-28 15:30:10.065519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.065565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 
00:34:23.359 [2024-10-28 15:30:10.065735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.359 [2024-10-28 15:30:10.065766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.359 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.065905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.065949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.066092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.066118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.066254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.066298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:23.360 [2024-10-28 15:30:10.066485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.066517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.066667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.066696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:23.360 [2024-10-28 15:30:10.066821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.066848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:23.360 [2024-10-28 15:30:10.066966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.066993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 
00:34:23.360 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.360 [2024-10-28 15:30:10.067126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.067153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.067287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.067357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.067515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.067570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.067748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.067789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.067910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.067938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.068065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.068094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.068250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.068279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.068429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.068460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.068608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.068636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 
00:34:23.360 [2024-10-28 15:30:10.068922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.068963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.069107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.069155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.069328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.069374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.069511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.069558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.069715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.069745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.069853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.069881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.070004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.070037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.070210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.070238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.070338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.070386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.070524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.070551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 
00:34:23.360 [2024-10-28 15:30:10.070672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.070700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.070822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.070850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.070947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.070975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.071110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.071138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.071280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.071308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.071495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.071522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.071648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.071689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.071832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.071876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.072006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.072050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.072154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.072182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 
00:34:23.360 [2024-10-28 15:30:10.072291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.072320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.360 [2024-10-28 15:30:10.072479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.360 [2024-10-28 15:30:10.072507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.360 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.072598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.072625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.072769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.072810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.072982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.073010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.073116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.073143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.073268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.073296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.073421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.073448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.073561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.073587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.073712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.073750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 
00:34:23.361 [2024-10-28 15:30:10.073888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.074079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.074125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.074282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.074329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.074457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.074497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.074745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.074798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.074953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.075100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.075243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.075398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.075576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 
00:34:23.361 [2024-10-28 15:30:10.075716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.075840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.075967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.075994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.076171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.076199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.076347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.076375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.076479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.076507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.076712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.076743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.076868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.076905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.077052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.077079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.077196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.077223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 
00:34:23.361 [2024-10-28 15:30:10.077322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.077350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.077451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.077490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.077718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.077747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.077845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.077873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.077995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.078023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.078143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.078170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.078329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.078366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.078503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.078530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.078635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.078670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-10-28 15:30:10.078778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-10-28 15:30:10.078806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 
00:34:23.362 [2024-10-28 15:30:10.079751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.362 [2024-10-28 15:30:10.079793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9d570 with addr=10.0.0.2, port=4420
00:34:23.362 qpair failed and we were unable to recover it.
00:34:23.362 [2024-10-28 15:30:10.079913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.362 [2024-10-28 15:30:10.079953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420
00:34:23.362 qpair failed and we were unable to recover it.
00:34:23.362 [2024-10-28 15:30:10.083526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.362 [2024-10-28 15:30:10.083563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea4c000b90 with addr=10.0.0.2, port=4420
00:34:23.362 qpair failed and we were unable to recover it.
00:34:23.363 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:23.363 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:23.363 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:23.363 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.366 [2024-10-28 15:30:10.110216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.366 [2024-10-28 15:30:10.110243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420
00:34:23.366 qpair failed and we were unable to recover it.
00:34:23.366 [2024-10-28 15:30:10.110395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-10-28 15:30:10.110424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-10-28 15:30:10.110558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-10-28 15:30:10.110586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-10-28 15:30:10.110758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-10-28 15:30:10.110803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.110927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.110971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.111090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.111134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.111235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.111262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.111360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.111387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.111550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.111580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.111699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.111727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.111823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.111850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-10-28 15:30:10.111972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.111999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.112147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.112184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.112414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.112441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.112621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.112655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.112767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.112798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.112925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.112955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.113093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.113121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.113246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.113275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.113429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.113456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.113584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.113611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-10-28 15:30:10.113716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.113744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.113837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.113864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.113990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.114017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.114169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.114196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.114324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.114355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.114447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.114474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.114599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.114625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.114755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.114796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.114926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.114953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.115145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.115171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-10-28 15:30:10.115294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.115319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.115494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.115530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.115665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.115692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.115815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.115844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.115978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.116007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.116209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.116238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.116350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.116380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.116521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.116550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.116702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.116728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.116825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.116855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-10-28 15:30:10.117025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.117071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.117215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.117260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.117365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.117418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.117579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.117607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.117736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.117782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-10-28 15:30:10.117887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-10-28 15:30:10.117917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.118059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.118086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.118324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.118352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.118474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.118513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.118730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.118757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-10-28 15:30:10.118864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.118892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.119086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.119114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.119336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.119365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.119548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.119576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.119762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.119809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.119977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.120009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.120202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.120231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.120368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.120397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.120542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.120573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.120754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.120782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-10-28 15:30:10.120896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.120942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.121052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.121082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.121224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.121269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.121508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.121535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.121729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.121779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.121880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.121907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.122059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.122086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.122219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.122245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.122469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.122495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.122648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.122682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-10-28 15:30:10.122780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.122806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.122952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.122979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.123148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.123200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.123398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.123425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.123549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.123576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.123713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.123743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.123865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.123895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.124029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.124059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.124276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.124321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.124407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.124434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-10-28 15:30:10.124562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.124589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.124739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.124767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.124864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.124891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.125025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.125051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.125174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.125201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.125334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.125361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.125559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.125587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.125730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.125775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-10-28 15:30:10.125939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-10-28 15:30:10.125976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.126218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.126264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-10-28 15:30:10.126378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.126406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.126613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.126641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.126785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.126830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.127075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.127120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.127304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.127349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.127568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.127595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.127745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.127791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.127908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.127961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.128123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.128178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.128330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.128376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-10-28 15:30:10.128587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.128615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.128760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.128806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.128935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.128981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.129109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.129136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.129296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.129327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.129554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.129582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.129757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.129803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.129903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.129933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.130112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.130156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.130290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.130334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-10-28 15:30:10.130564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.130592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.130777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.130821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.130949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.130994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.131105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.131161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.131336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.131366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.131478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.131505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.131690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.131718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.131829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.131859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.132051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.132096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.132233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.132278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-10-28 15:30:10.132375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.132403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.132532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.132559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.132707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.132735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.132842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.132869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.132975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.133002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.133174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.133201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.133307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.133335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.133481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.133508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.133637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.133670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.133804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.133831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-10-28 15:30:10.133957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.133984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.134157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.134189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.134322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.134349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-10-28 15:30:10.134534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-10-28 15:30:10.134562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.134729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.134756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.134859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.134886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.134987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.135015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.135163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.135191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.135292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.135320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.135466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.135493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 [2024-10-28 15:30:10.135623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.135657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.135816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.135843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.135964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.135991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.136124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.136160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.136305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.136338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.136464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.136492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.136610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.136637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.136757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.136800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.136916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.136944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.137077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.137105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 [2024-10-28 15:30:10.137240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.137267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.137399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.137426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.137556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.137583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.137731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.137761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.137860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.137890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.138037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.138082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.138248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.138275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.138444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.138471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.138561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.138588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.138740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.138792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 [2024-10-28 15:30:10.138937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.138971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.139125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.139165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 Malloc0 00:34:23.370 [2024-10-28 15:30:10.139360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.139406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.139502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.139529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.139662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.139691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.370 [2024-10-28 15:30:10.139797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.139827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:23.370 [2024-10-28 15:30:10.139967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.139997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.370 [2024-10-28 15:30:10.140154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.140181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.370 [2024-10-28 15:30:10.140304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.140331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.140498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-10-28 15:30:10.140533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-10-28 15:30:10.140684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.140712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.140865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.140893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.141011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.141038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.141146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.141180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.141331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.141358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.141453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.141480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.141631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.141665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 
00:34:23.371 [2024-10-28 15:30:10.141794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.141821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.141971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.141999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.142131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.142158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.142280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.142307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.142426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.142453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.142543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.142569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.142695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.142737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.142870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.142899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.143005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.371 [2024-10-28 15:30:10.143054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.143080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 
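Interleaved with the connect retries, the target side is being stood up: the rpc_cmd nvmf_create_transport -t tcp -o call issued a couple of blocks above is what produced the "*** TCP Transport Init ***" notice in this block (rpc_cmd is the test harness's wrapper around SPDK's JSON-RPC client). A roughly equivalent stand-alone invocation, sketched under the assumption of a target listening on the default RPC socket, would be:

    # flags copied verbatim from the trace; scripts/rpc.py path and default socket are assumptions
    scripts/rpc.py nvmf_create_transport -t tcp -o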
00:34:23.371 [2024-10-28 15:30:10.143201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.143228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.143377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.143403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.143490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.143516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.143641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.143674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.143791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.143819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.143941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.143967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.144095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.144123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.144244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.144271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.144399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.144426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.144511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.144538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 
00:34:23.371 [2024-10-28 15:30:10.144691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.144719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.144840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.144867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.144991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.145018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-10-28 15:30:10.145148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-10-28 15:30:10.145175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.145267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.145293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.145420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.145447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.145573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.145600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.145687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.145714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.145811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.145838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.145964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.145991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-10-28 15:30:10.146144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.146170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.146294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.146321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.146444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.146471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.146591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.146618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.146724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.146751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.146878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.146905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.147052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.147080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.147172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.147199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.147320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.147347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.147471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.147498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-10-28 15:30:10.147623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.147658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.147763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.147789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.147895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.147923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.148048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.148075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.148202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.148228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.148353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.148380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.148504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.148537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.148677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.148705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.148808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.148835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.148961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.148989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-10-28 15:30:10.149111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.149138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.149283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.149311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.149413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.149440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.149606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.149633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.149789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.149816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.149925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.149952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.150069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.150097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.150224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.150252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.150383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.150410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.150514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.150541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-10-28 15:30:10.150682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.150719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.150820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.150847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.150999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.151026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-10-28 15:30:10.151172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-10-28 15:30:10.151199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.373 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.373 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:23.373 [2024-10-28 15:30:10.151373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.151405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.373 [2024-10-28 15:30:10.151516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.151544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.373 [2024-10-28 15:30:10.151727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.151755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.151872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.151899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
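The next setup step in the trace, rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, creates the NVMe-oF subsystem the host will later connect to (-a allows any host, -s sets the serial number). A minimal stand-alone sketch of the same step via the JSON-RPC client, RPC socket assumed to be the default:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001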
00:34:23.373 [2024-10-28 15:30:10.152036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.152063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.152158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.152185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.152344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.152371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.152584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.152616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.152788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.152816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.152917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.152956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.153123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.153171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.153326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.153354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.153481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.153508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.153628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.153660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
00:34:23.373 [2024-10-28 15:30:10.153806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.153851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.154021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.154048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.154179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.154207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.154445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.154473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.154615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.154643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.154788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.154815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.155003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.155030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.155168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.155196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.155323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.155361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.155514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.155552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
00:34:23.373 [2024-10-28 15:30:10.155691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.155730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.155853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.155918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.156105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.156152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.156288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.156333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.156434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.156461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.156633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.156669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.156824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.156851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.156961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.156991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.157192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.157236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.157338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.157366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
00:34:23.373 [2024-10-28 15:30:10.157556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.157585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.157749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.157805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.158072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.158127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.158271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-10-28 15:30:10.158303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-10-28 15:30:10.158512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-10-28 15:30:10.158542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-10-28 15:30:10.158689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-10-28 15:30:10.158720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-10-28 15:30:10.158902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-10-28 15:30:10.158943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-10-28 15:30:10.159197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-10-28 15:30:10.159230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea50000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.374 [2024-10-28 15:30:10.159392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-10-28 15:30:10.159454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 
00:34:23.374 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.374 [2024-10-28 15:30:10.159676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.374 [2024-10-28 15:30:10.159708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.635 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.635 [2024-10-28 15:30:10.159819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.159847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.160001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.160052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.160235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.160281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.160466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.160512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.160658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.160697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.160837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.160882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.161076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.161122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 
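Here the trace attaches the Malloc0 bdev (its name is echoed a few blocks above; its creation is not shown in this excerpt) as a namespace of the subsystem with rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0. The equivalent stand-alone form, same assumptions as the earlier sketches:

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0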
00:34:23.635 [2024-10-28 15:30:10.161259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.161305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.161493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.161521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.161690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.161718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.161835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.161881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.162113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.162141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.162351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.162395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.162539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.162566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.162754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.162800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.162933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.162986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.163135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.163180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 
00:34:23.635 [2024-10-28 15:30:10.163307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.163352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.163576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.163604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.163801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.163848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.164074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.164119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.164278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.164323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.164449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.164476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.164602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.164629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.164732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.164760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.164889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.164916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.165045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.165072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 
00:34:23.635 [2024-10-28 15:30:10.165322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.165350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.165527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.165555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.165723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.165752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.165873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.165901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.166053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.166080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.166170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.166197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.166398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.166425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.166613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.166655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.166775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.166820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 [2024-10-28 15:30:10.166999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.167026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 
00:34:23.635 [2024-10-28 15:30:10.167135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.167182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.635 [2024-10-28 15:30:10.167392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.167438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.635 [2024-10-28 15:30:10.167624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.167659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.635 [2024-10-28 15:30:10.167798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.635 [2024-10-28 15:30:10.167843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.635 qpair failed and we were unable to recover it. 00:34:23.635 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.636 [2024-10-28 15:30:10.168067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.168098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.168302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.168346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.168508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.168540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.168624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.168657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 
00:34:23.636 [2024-10-28 15:30:10.168782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.168827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.168973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.169022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.169197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.169244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.169414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.169441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.169677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.169705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.169812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.169842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.170052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.170096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.170259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.170305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.170407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.170434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.170599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.170627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 
00:34:23.636 [2024-10-28 15:30:10.170771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.170817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.171003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.171057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.171200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.636 [2024-10-28 15:30:10.171244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea44000b90 with addr=10.0.0.2, port=4420 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.171362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.636 [2024-10-28 15:30:10.173868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.636 [2024-10-28 15:30:10.174011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.636 [2024-10-28 15:30:10.174040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.636 [2024-10-28 15:30:10.174056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.636 [2024-10-28 15:30:10.174070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.636 [2024-10-28 15:30:10.174108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.636 qpair failed and we were unable to recover it. 
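Every retry in the block above fails the same way: posix_sock_create() gets errno 111 (ECONNREFUSED) while no listener exists, and once the NVMe/TCP listener on 10.0.0.2:4420 comes up, the I/O-qpair CONNECT is rejected with sct 1, sc 130 because the target no longer recognizes controller ID 0x1. When triaging a console log like this one, the repeats can be tallied with plain grep; the log path below is an assumption, not something taken from this run:

```bash
#!/usr/bin/env bash
# Tally the recurring failure signatures in a saved console log.
LOG=${1:-console.log}   # assumed path to the captured Jenkins console output

printf 'ECONNREFUSED retries: %s\n' "$(grep -c 'connect() failed, errno = 111' "$LOG")"
printf 'CONNECT rejections:   %s\n' "$(grep -c 'sct 1, sc 130' "$LOG")"
printf 'Unrecovered qpairs:   %s\n' "$(grep -c 'qpair failed and we were unable to recover it' "$LOG")"
```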
00:34:23.636 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.636 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:23.636 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.636 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.636 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.636 15:30:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3318941 00:34:23.636 [2024-10-28 15:30:10.183725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.636 [2024-10-28 15:30:10.183823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.636 [2024-10-28 15:30:10.183851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.636 [2024-10-28 15:30:10.183867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.636 [2024-10-28 15:30:10.183890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.636 [2024-10-28 15:30:10.183933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.193731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.636 [2024-10-28 15:30:10.193825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.636 [2024-10-28 15:30:10.193854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.636 [2024-10-28 15:30:10.193869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.636 [2024-10-28 15:30:10.193882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.636 [2024-10-28 15:30:10.193924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.636 qpair failed and we were unable to recover it. 
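The two rpc_cmd invocations traced above add the data and discovery listeners through SPDK's JSON-RPC interface before the script waits on the perf process. Outside the test harness the same calls could be issued directly with scripts/rpc.py; a minimal sketch, assuming the target's RPC socket sits at the default /var/tmp/spdk.sock and the subsystem nqn.2016-06.io.spdk:cnode1 already exists:

```bash
# Equivalent standalone RPC calls for the two listeners added by the test.
# Assumes the nvmf target is already running with its RPC socket at the default path.
RPC=./scripts/rpc.py
SOCK=/var/tmp/spdk.sock

"$RPC" -s "$SOCK" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" -s "$SOCK" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```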
00:34:23.636 [2024-10-28 15:30:10.203747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.636 [2024-10-28 15:30:10.203853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.636 [2024-10-28 15:30:10.203881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.636 [2024-10-28 15:30:10.203896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.636 [2024-10-28 15:30:10.203910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.636 [2024-10-28 15:30:10.203943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.213699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.636 [2024-10-28 15:30:10.213797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.636 [2024-10-28 15:30:10.213823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.636 [2024-10-28 15:30:10.213839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.636 [2024-10-28 15:30:10.213863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.636 [2024-10-28 15:30:10.213895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.636 qpair failed and we were unable to recover it. 00:34:23.636 [2024-10-28 15:30:10.223682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.636 [2024-10-28 15:30:10.223774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.636 [2024-10-28 15:30:10.223802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.636 [2024-10-28 15:30:10.223818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.636 [2024-10-28 15:30:10.223832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.636 [2024-10-28 15:30:10.223863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.636 qpair failed and we were unable to recover it. 
00:34:23.636 [2024-10-28 15:30:10.233661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.636 [2024-10-28 15:30:10.233794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.636 [2024-10-28 15:30:10.233822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.636 [2024-10-28 15:30:10.233839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.636 [2024-10-28 15:30:10.233852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.233884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.243730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.243827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.243856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.243872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.243885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.243918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.253793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.253891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.253918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.253934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.253947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.253980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 
00:34:23.637 [2024-10-28 15:30:10.263802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.263898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.263939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.263955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.263969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.264007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.273822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.273913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.273946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.273962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.273977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.274009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.283841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.283938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.283966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.283981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.283994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.284027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 
00:34:23.637 [2024-10-28 15:30:10.293837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.293937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.293964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.293979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.293992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.294025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.303856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.303943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.303970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.303985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.303998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.304031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.313877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.313969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.313997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.314012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.314032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.314065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 
00:34:23.637 [2024-10-28 15:30:10.323911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.324002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.324029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.324045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.324058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.324089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.333929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.334021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.334048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.334064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.334077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.334108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.343993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.344076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.344102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.344116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.344130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.344161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 
00:34:23.637 [2024-10-28 15:30:10.354042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.354132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.354160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.354175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.354188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.354220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.364124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.364218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.364244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.364259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.637 [2024-10-28 15:30:10.364272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.637 [2024-10-28 15:30:10.364302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.637 qpair failed and we were unable to recover it. 00:34:23.637 [2024-10-28 15:30:10.374083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.637 [2024-10-28 15:30:10.374169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.637 [2024-10-28 15:30:10.374195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.637 [2024-10-28 15:30:10.374210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.374222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.374254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 
00:34:23.638 [2024-10-28 15:30:10.384121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.384204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.384230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.384245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.384257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.384288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.394150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.394252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.394278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.394293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.394306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.394338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.404172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.404268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.404301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.404317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.404330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.404361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 
00:34:23.638 [2024-10-28 15:30:10.414239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.414331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.414357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.414372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.414385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.414430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.424223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.424313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.424340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.424354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.424367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.424397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.434234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.434319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.434345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.434360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.434372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.434403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 
00:34:23.638 [2024-10-28 15:30:10.444269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.444364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.444392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.444414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.444429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.444461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.454302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.454389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.454416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.454432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.454444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.454477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.464326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.464416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.464443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.464458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.464471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.464502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 
00:34:23.638 [2024-10-28 15:30:10.474331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.474426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.474452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.474466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.474479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.474510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.484391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.484497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.484523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.484538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.484550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.484587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 00:34:23.638 [2024-10-28 15:30:10.494435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.638 [2024-10-28 15:30:10.494544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.638 [2024-10-28 15:30:10.494577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.638 [2024-10-28 15:30:10.494594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.638 [2024-10-28 15:30:10.494608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.638 [2024-10-28 15:30:10.494642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.638 qpair failed and we were unable to recover it. 
00:34:23.899 [2024-10-28 15:30:10.504453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.504541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.504568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.504583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.504595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.504641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 00:34:23.899 [2024-10-28 15:30:10.514430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.514517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.514545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.514561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.514573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.514604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 00:34:23.899 [2024-10-28 15:30:10.524483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.524584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.524610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.524624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.524665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.524704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 
00:34:23.899 [2024-10-28 15:30:10.534529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.534647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.534692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.534708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.534721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.534753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 00:34:23.899 [2024-10-28 15:30:10.544514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.544600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.544626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.544641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.544677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.544719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 00:34:23.899 [2024-10-28 15:30:10.554538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.554667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.554694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.554709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.554722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.554754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 
00:34:23.899 [2024-10-28 15:30:10.564602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.564733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.564760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.564775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.564787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.564819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 00:34:23.899 [2024-10-28 15:30:10.574675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.574760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.574786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.574807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.574823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.574854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 00:34:23.899 [2024-10-28 15:30:10.584707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.584793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.584820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.584836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.584849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.584881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 
00:34:23.899 [2024-10-28 15:30:10.594700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.594794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.594821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.594836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.899 [2024-10-28 15:30:10.594849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.899 [2024-10-28 15:30:10.594883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.899 qpair failed and we were unable to recover it. 00:34:23.899 [2024-10-28 15:30:10.604731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.899 [2024-10-28 15:30:10.604866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.899 [2024-10-28 15:30:10.604893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.899 [2024-10-28 15:30:10.604909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.604922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.604954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.614730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.614824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.614851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.614866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.614879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.614916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 
00:34:23.900 [2024-10-28 15:30:10.624756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.624843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.624869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.624884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.624897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.624929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.634798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.634899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.634925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.634955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.634968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.634999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.644896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.645049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.645075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.645098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.645111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.645143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 
00:34:23.900 [2024-10-28 15:30:10.654857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.654968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.654994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.655009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.655021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.655051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.664948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.665034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.665059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.665074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.665086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.665116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.674952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.675052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.675076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.675091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.675103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.675133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 
00:34:23.900 [2024-10-28 15:30:10.684964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.685083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.685109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.685124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.685136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.685167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.694990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.695078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.695105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.695120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.695132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.695163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.705040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.705133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.705166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.705181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.705194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.705224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 
00:34:23.900 [2024-10-28 15:30:10.715064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.715153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.715178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.715193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.715205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.715236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.725118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.725212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.725236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.725250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.725262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.725292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 00:34:23.900 [2024-10-28 15:30:10.735170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.735269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.900 [2024-10-28 15:30:10.735294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.900 [2024-10-28 15:30:10.735310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.900 [2024-10-28 15:30:10.735323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.900 [2024-10-28 15:30:10.735354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.900 qpair failed and we were unable to recover it. 
00:34:23.900 [2024-10-28 15:30:10.745147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.900 [2024-10-28 15:30:10.745233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.901 [2024-10-28 15:30:10.745259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.901 [2024-10-28 15:30:10.745273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.901 [2024-10-28 15:30:10.745293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.901 [2024-10-28 15:30:10.745323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.901 qpair failed and we were unable to recover it. 00:34:23.901 [2024-10-28 15:30:10.755210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.901 [2024-10-28 15:30:10.755301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.901 [2024-10-28 15:30:10.755325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.901 [2024-10-28 15:30:10.755340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.901 [2024-10-28 15:30:10.755353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:23.901 [2024-10-28 15:30:10.755384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.901 qpair failed and we were unable to recover it. 00:34:24.160 [2024-10-28 15:30:10.765211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.765331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.765359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.765375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.765387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.765419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 
00:34:24.160 [2024-10-28 15:30:10.775226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.775314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.775342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.775357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.775369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.775400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 00:34:24.160 [2024-10-28 15:30:10.785262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.785379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.785406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.785421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.785434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.785465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 00:34:24.160 [2024-10-28 15:30:10.795269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.795363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.795390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.795405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.795417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.795447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 
00:34:24.160 [2024-10-28 15:30:10.805310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.805396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.805422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.805437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.805450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.805481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 00:34:24.160 [2024-10-28 15:30:10.815394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.815490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.815517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.815532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.815545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.815576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 00:34:24.160 [2024-10-28 15:30:10.825418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.825502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.825528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.825543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.825556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.825586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 
00:34:24.160 [2024-10-28 15:30:10.835375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.835459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.835491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.835507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.835519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.835551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 00:34:24.160 [2024-10-28 15:30:10.845450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.845540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.845564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.845578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.845590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.845620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 00:34:24.160 [2024-10-28 15:30:10.855416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.160 [2024-10-28 15:30:10.855518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.160 [2024-10-28 15:30:10.855544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.160 [2024-10-28 15:30:10.855558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.160 [2024-10-28 15:30:10.855571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.160 [2024-10-28 15:30:10.855601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.160 qpair failed and we were unable to recover it. 
00:34:24.161 [2024-10-28 15:30:10.865455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.865541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.865567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.865582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.865594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.865623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.875521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.875603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.875627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.875666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.875686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.875718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.885669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.885776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.885803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.885817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.885830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.885862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 
00:34:24.161 [2024-10-28 15:30:10.895668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.895773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.895800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.895815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.895833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.895865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.905670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.905773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.905800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.905815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.905828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.905868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.915710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.915797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.915824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.915840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.915853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.915892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 
00:34:24.161 [2024-10-28 15:30:10.925698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.925810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.925836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.925851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.925864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.925897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.935715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.935806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.935833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.935848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.935861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.935899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.945700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.945788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.945815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.945830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.945843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.945874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 
00:34:24.161 [2024-10-28 15:30:10.955739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.955833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.955857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.955872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.955885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.955917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.965810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.965931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.965971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.965993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.966005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.966037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.975843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.975955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.975981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.975995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.976008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.976039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 
00:34:24.161 [2024-10-28 15:30:10.985857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.985964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.985989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.986004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.986017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.161 [2024-10-28 15:30:10.986047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.161 qpair failed and we were unable to recover it. 00:34:24.161 [2024-10-28 15:30:10.995972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-10-28 15:30:10.996086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-10-28 15:30:10.996110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.161 [2024-10-28 15:30:10.996125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.161 [2024-10-28 15:30:10.996139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.162 [2024-10-28 15:30:10.996170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-10-28 15:30:11.005888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-10-28 15:30:11.005998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-10-28 15:30:11.006022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-10-28 15:30:11.006042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-10-28 15:30:11.006055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.162 [2024-10-28 15:30:11.006087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.162 qpair failed and we were unable to recover it. 
00:34:24.162 [2024-10-28 15:30:11.015899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-10-28 15:30:11.016003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-10-28 15:30:11.016029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-10-28 15:30:11.016044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-10-28 15:30:11.016056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.162 [2024-10-28 15:30:11.016087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.420 [2024-10-28 15:30:11.025959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.420 [2024-10-28 15:30:11.026053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.420 [2024-10-28 15:30:11.026080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.420 [2024-10-28 15:30:11.026095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.420 [2024-10-28 15:30:11.026109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.420 [2024-10-28 15:30:11.026142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.420 qpair failed and we were unable to recover it. 00:34:24.420 [2024-10-28 15:30:11.036009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.036093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.036119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.036135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.036147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.036179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 
00:34:24.421 [2024-10-28 15:30:11.046257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.046368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.046395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.046411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.046425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.046480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 00:34:24.421 [2024-10-28 15:30:11.056089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.056208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.056235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.056250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.056263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.056294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 00:34:24.421 [2024-10-28 15:30:11.066134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.066219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.066245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.066260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.066272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.066303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 
00:34:24.421 [2024-10-28 15:30:11.076097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.076183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.076209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.076224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.076237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.076266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 00:34:24.421 [2024-10-28 15:30:11.086126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.086215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.086241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.086257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.086270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.086301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 00:34:24.421 [2024-10-28 15:30:11.096182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.096282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.096308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.096323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.096336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.096376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 
00:34:24.421 [2024-10-28 15:30:11.106146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.106228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.106253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.106267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.106281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.106311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 00:34:24.421 [2024-10-28 15:30:11.116195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.116284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.116310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.116325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.116337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.116367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 00:34:24.421 [2024-10-28 15:30:11.126259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.421 [2024-10-28 15:30:11.126360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.421 [2024-10-28 15:30:11.126386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.421 [2024-10-28 15:30:11.126400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.421 [2024-10-28 15:30:11.126413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:24.421 [2024-10-28 15:30:11.126444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.421 qpair failed and we were unable to recover it. 
00:34:24.421 [2024-10-28 15:30:11.136339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.421 [2024-10-28 15:30:11.136425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.421 [2024-10-28 15:30:11.136452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.421 [2024-10-28 15:30:11.136473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.421 [2024-10-28 15:30:11.136487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90
00:34:24.421 [2024-10-28 15:30:11.136518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:24.421 qpair failed and we were unable to recover it.
[The same CONNECT failure sequence repeats for every subsequent attempt, roughly every 10 ms, from 2024-10-28 15:30:11.146 through 15:30:11.818. Each attempt reports the identical errors (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; CQ transport error -6 on qpair id 4) and ends with "qpair failed and we were unable to recover it."]
00:34:25.205 [2024-10-28 15:30:11.828278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.205 [2024-10-28 15:30:11.828396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.205 [2024-10-28 15:30:11.828428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.205 [2024-10-28 15:30:11.828445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.205 [2024-10-28 15:30:11.828458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.205 [2024-10-28 15:30:11.828489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.205 qpair failed and we were unable to recover it. 00:34:25.205 [2024-10-28 15:30:11.838332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.205 [2024-10-28 15:30:11.838459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.205 [2024-10-28 15:30:11.838486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.205 [2024-10-28 15:30:11.838502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.205 [2024-10-28 15:30:11.838515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.838545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.848335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.848429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.848454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.848468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.848481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.848511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-10-28 15:30:11.858321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.858449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.858477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.858492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.858504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.858534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.868346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.868433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.868459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.868473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.868491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.868524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.878371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.878451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.878477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.878492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.878504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.878535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-10-28 15:30:11.888453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.888545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.888569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.888583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.888596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.888626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.898454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.898562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.898586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.898600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.898613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.898667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.908463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.908552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.908576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.908591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.908603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.908648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-10-28 15:30:11.918488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.918567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.918593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.918609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.918621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.918675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.928506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.928647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.928680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.928696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.928708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.928740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.938583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.938719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.938755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.938772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.938798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.938830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-10-28 15:30:11.948605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.948710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.948737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.948752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.948766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.948807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.958698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.958792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.958824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.958840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.958853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.958884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-10-28 15:30:11.968698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.968805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.968830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-10-28 15:30:11.968846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-10-28 15:30:11.968859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.206 [2024-10-28 15:30:11.968891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-10-28 15:30:11.978698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-10-28 15:30:11.978811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-10-28 15:30:11.978845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:11.978861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:11.978873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:11.978906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-10-28 15:30:11.988726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:11.988813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:11.988839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:11.988854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:11.988867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:11.988898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-10-28 15:30:11.998739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:11.998873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:11.998901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:11.998922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:11.998936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:11.998982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-10-28 15:30:12.008754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:12.008847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:12.008873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:12.008889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:12.008901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:12.008949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-10-28 15:30:12.018829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:12.018923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:12.018948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:12.018978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:12.018992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:12.019022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-10-28 15:30:12.028857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:12.028991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:12.029018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:12.029033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:12.029046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:12.029077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-10-28 15:30:12.038824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:12.038917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:12.038941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:12.038955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:12.038968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:12.039015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-10-28 15:30:12.048978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:12.049083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:12.049107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:12.049121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:12.049133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:12.049163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-10-28 15:30:12.058899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:12.058987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:12.059028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:12.059043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:12.059055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:12.059086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-10-28 15:30:12.068941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-10-28 15:30:12.069085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-10-28 15:30:12.069114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-10-28 15:30:12.069130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-10-28 15:30:12.069144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.207 [2024-10-28 15:30:12.069177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.467 [2024-10-28 15:30:12.078997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-10-28 15:30:12.079087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-10-28 15:30:12.079116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-10-28 15:30:12.079132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-10-28 15:30:12.079145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.467 [2024-10-28 15:30:12.079177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-10-28 15:30:12.089044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-10-28 15:30:12.089166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-10-28 15:30:12.089191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-10-28 15:30:12.089206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-10-28 15:30:12.089218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.467 [2024-10-28 15:30:12.089250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.467 qpair failed and we were unable to recover it. 
00:34:25.467 [2024-10-28 15:30:12.099058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-10-28 15:30:12.099145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-10-28 15:30:12.099172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.099187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.099200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.099231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.109025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.109113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.109138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.109152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.109165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.109196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.119096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.119181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.119206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.119220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.119232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.119261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 
00:34:25.468 [2024-10-28 15:30:12.129145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.129234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.129259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.129279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.129291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.129322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.139191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.139318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.139345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.139360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.139373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.139403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.149134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.149224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.149250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.149265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.149277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.149307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 
00:34:25.468 [2024-10-28 15:30:12.159174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.159295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.159322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.159337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.159351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.159381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.169355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.169474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.169498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.169513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.169526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.169564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.179296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.179380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.179405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.179419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.179432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.179462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 
00:34:25.468 [2024-10-28 15:30:12.189284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.189386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.189412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.189427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.189439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.189480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.199308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-10-28 15:30:12.199392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-10-28 15:30:12.199416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-10-28 15:30:12.199430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-10-28 15:30:12.199442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.468 [2024-10-28 15:30:12.199473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-10-28 15:30:12.209320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.209435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.209461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.209476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.209488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.209518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 
00:34:25.469 [2024-10-28 15:30:12.219430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.219519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.219544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.219559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.219572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.219603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.229425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.229534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.229561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.229576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.229588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.229619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.239425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.239522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.239548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.239564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.239578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.239608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 
00:34:25.469 [2024-10-28 15:30:12.249499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.249592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.249618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.249656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.249672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.249714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.259515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.259665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.259700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.259717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.259731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.259763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.269517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.269606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.269646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.269675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.269688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.269722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 
00:34:25.469 [2024-10-28 15:30:12.279566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.279680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.279707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.279724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.279737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.279769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.289537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.289672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.289699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.289715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.289728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.289761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.299577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.299683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.299710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.299725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.299739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.299778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 
00:34:25.469 [2024-10-28 15:30:12.309618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.309765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.309792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.309807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.309819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.309852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.319669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.319757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.319784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.319799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.319812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.319844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-10-28 15:30:12.329699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-10-28 15:30:12.329834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-10-28 15:30:12.329863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-10-28 15:30:12.329878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-10-28 15:30:12.329892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.469 [2024-10-28 15:30:12.329936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.469 qpair failed and we were unable to recover it. 
00:34:25.729 [2024-10-28 15:30:12.339765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.339853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.339881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.339908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.339921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.339970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-10-28 15:30:12.349715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.349802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.349830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.349845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.349858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.349901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-10-28 15:30:12.359754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.359887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.359915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.359930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.359943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.359990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 
00:34:25.729 [2024-10-28 15:30:12.369808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.369947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.369974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.369989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.370002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.370033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-10-28 15:30:12.379828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.379950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.379976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.379992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.380004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.380036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-10-28 15:30:12.389828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.389920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.389952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.389984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.389997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.390029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 
00:34:25.729 [2024-10-28 15:30:12.399927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.400028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.400054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.400069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.400082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.400114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-10-28 15:30:12.409873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.409968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.409994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-10-28 15:30:12.410010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-10-28 15:30:12.410022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.729 [2024-10-28 15:30:12.410055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-10-28 15:30:12.419968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-10-28 15:30:12.420067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-10-28 15:30:12.420092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.420107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.420120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.420150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 
00:34:25.730 [2024-10-28 15:30:12.429995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.430088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.430113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.430127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.430145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.430176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.440004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.440089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.440114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.440128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.440140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.440171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.450071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.450163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.450187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.450201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.450214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.450244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 
00:34:25.730 [2024-10-28 15:30:12.460078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.460183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.460207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.460221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.460234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.460266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.470060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.470142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.470169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.470183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.470196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.470227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.480075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.480172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.480199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.480213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.480226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.480256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 
00:34:25.730 [2024-10-28 15:30:12.490159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.490291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.490317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.490333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.490346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.490378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.500134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.500229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.500257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.500272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.500285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.500317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.510263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.510375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.510402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.510417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.510430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.510460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 
00:34:25.730 [2024-10-28 15:30:12.520237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.520352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.520384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.520400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.520413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.520444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.530276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.530364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.530390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.530405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.530418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.530448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.540255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.540342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.540368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.540383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.540394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.540425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 
00:34:25.730 [2024-10-28 15:30:12.550301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-10-28 15:30:12.550389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-10-28 15:30:12.550415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-10-28 15:30:12.550430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-10-28 15:30:12.550442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.730 [2024-10-28 15:30:12.550472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-10-28 15:30:12.560350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.731 [2024-10-28 15:30:12.560436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.731 [2024-10-28 15:30:12.560461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.731 [2024-10-28 15:30:12.560480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.731 [2024-10-28 15:30:12.560494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.731 [2024-10-28 15:30:12.560524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.731 qpair failed and we were unable to recover it. 00:34:25.731 [2024-10-28 15:30:12.570323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.731 [2024-10-28 15:30:12.570412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.731 [2024-10-28 15:30:12.570438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.731 [2024-10-28 15:30:12.570453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.731 [2024-10-28 15:30:12.570466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.731 [2024-10-28 15:30:12.570497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.731 qpair failed and we were unable to recover it. 
00:34:25.731 [2024-10-28 15:30:12.580346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.731 [2024-10-28 15:30:12.580434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.731 [2024-10-28 15:30:12.580459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.731 [2024-10-28 15:30:12.580474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.731 [2024-10-28 15:30:12.580487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.731 [2024-10-28 15:30:12.580518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.731 qpair failed and we were unable to recover it. 00:34:25.731 [2024-10-28 15:30:12.590383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.731 [2024-10-28 15:30:12.590506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.731 [2024-10-28 15:30:12.590535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.731 [2024-10-28 15:30:12.590551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.731 [2024-10-28 15:30:12.590565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.731 [2024-10-28 15:30:12.590597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.731 qpair failed and we were unable to recover it. 00:34:25.990 [2024-10-28 15:30:12.600456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-10-28 15:30:12.600557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-10-28 15:30:12.600584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-10-28 15:30:12.600600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-10-28 15:30:12.600613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.990 [2024-10-28 15:30:12.600667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.990 qpair failed and we were unable to recover it. 
00:34:25.990 [2024-10-28 15:30:12.610531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-10-28 15:30:12.610625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-10-28 15:30:12.610675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.610691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.610705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.610738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.620569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.620696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.620721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.620738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.620752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.620784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.630488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.630579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.630604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.630619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.630647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.630687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 
00:34:25.991 [2024-10-28 15:30:12.640545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.640640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.640678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.640695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.640708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.640742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.650605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.650729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.650755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.650771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.650784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.650815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.660614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.660726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.660751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.660766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.660779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.660811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 
00:34:25.991 [2024-10-28 15:30:12.670621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.670759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.670786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.670802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.670815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.670847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.680672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.680764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.680791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.680807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.680820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.680854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.690723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.690860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.690887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.690910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.690923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.690955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 
00:34:25.991 [2024-10-28 15:30:12.700747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.700837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.700864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.700879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.700892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.700923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.710788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.710879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.710904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.710919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.710932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.710979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.720859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.720955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.720981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.721012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.721026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.721056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 
00:34:25.991 [2024-10-28 15:30:12.730835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.730928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.730955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.730985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.731001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.731039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.991 [2024-10-28 15:30:12.740934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.991 [2024-10-28 15:30:12.741048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.991 [2024-10-28 15:30:12.741072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.991 [2024-10-28 15:30:12.741088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.991 [2024-10-28 15:30:12.741101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.991 [2024-10-28 15:30:12.741132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.991 qpair failed and we were unable to recover it. 00:34:25.992 [2024-10-28 15:30:12.750845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.750955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.750981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.750996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.751008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.751039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 
00:34:25.992 [2024-10-28 15:30:12.760907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.761015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.761039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.761057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.761069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.761100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 00:34:25.992 [2024-10-28 15:30:12.770931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.771055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.771080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.771094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.771107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.771137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 00:34:25.992 [2024-10-28 15:30:12.780986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.781086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.781113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.781128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.781140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.781170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 
00:34:25.992 [2024-10-28 15:30:12.791008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.791100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.791126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.791140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.791153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.791184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 00:34:25.992 [2024-10-28 15:30:12.801020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.801105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.801131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.801145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.801157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.801188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 00:34:25.992 [2024-10-28 15:30:12.811065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.811153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.811178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.811193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.811206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.811237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 
00:34:25.992 [2024-10-28 15:30:12.821055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.821144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.821174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.821190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.821202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.821234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 00:34:25.992 [2024-10-28 15:30:12.831094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.831181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.831206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.831220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.831234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.831264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 00:34:25.992 [2024-10-28 15:30:12.841178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.841261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.841285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.841299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.841312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.841343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 
00:34:25.992 [2024-10-28 15:30:12.851177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.992 [2024-10-28 15:30:12.851275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.992 [2024-10-28 15:30:12.851300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.992 [2024-10-28 15:30:12.851315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.992 [2024-10-28 15:30:12.851328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:25.992 [2024-10-28 15:30:12.851360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.992 qpair failed and we were unable to recover it. 00:34:26.251 [2024-10-28 15:30:12.861196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.251 [2024-10-28 15:30:12.861304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.251 [2024-10-28 15:30:12.861334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.251 [2024-10-28 15:30:12.861351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.251 [2024-10-28 15:30:12.861371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.251 [2024-10-28 15:30:12.861406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.251 qpair failed and we were unable to recover it. 00:34:26.251 [2024-10-28 15:30:12.871226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.251 [2024-10-28 15:30:12.871322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.251 [2024-10-28 15:30:12.871347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.251 [2024-10-28 15:30:12.871361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.251 [2024-10-28 15:30:12.871377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.251 [2024-10-28 15:30:12.871407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.251 qpair failed and we were unable to recover it. 
00:34:26.251 [2024-10-28 15:30:12.881252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.251 [2024-10-28 15:30:12.881358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.251 [2024-10-28 15:30:12.881385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.251 [2024-10-28 15:30:12.881399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.251 [2024-10-28 15:30:12.881412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.251 [2024-10-28 15:30:12.881443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.251 qpair failed and we were unable to recover it. 00:34:26.251 [2024-10-28 15:30:12.891372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.251 [2024-10-28 15:30:12.891482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.251 [2024-10-28 15:30:12.891508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.251 [2024-10-28 15:30:12.891523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.251 [2024-10-28 15:30:12.891536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.251 [2024-10-28 15:30:12.891567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.251 qpair failed and we were unable to recover it. 00:34:26.251 [2024-10-28 15:30:12.901348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.251 [2024-10-28 15:30:12.901442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.251 [2024-10-28 15:30:12.901467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.251 [2024-10-28 15:30:12.901481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.251 [2024-10-28 15:30:12.901495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.251 [2024-10-28 15:30:12.901525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.251 qpair failed and we were unable to recover it. 
00:34:26.251 [2024-10-28 15:30:12.911351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.251 [2024-10-28 15:30:12.911438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.251 [2024-10-28 15:30:12.911465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.251 [2024-10-28 15:30:12.911479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.911492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.911522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:12.921440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.921524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.921549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.921563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.921575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.921606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:12.931357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.931454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.931480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.931495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.931507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.931538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 
00:34:26.252 [2024-10-28 15:30:12.941481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.941567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.941593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.941607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.941620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.941673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:12.951440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.951556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.951591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.951608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.951620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.951675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:12.961488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.961575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.961601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.961615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.961628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.961684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 
00:34:26.252 [2024-10-28 15:30:12.971528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.971620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.971666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.971682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.971695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.971727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:12.981522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.981610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.981661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.981681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.981694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.981727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:12.991541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:12.991625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:12.991674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:12.991690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:12.991710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:12.991743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 
00:34:26.252 [2024-10-28 15:30:13.001558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:13.001671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:13.001698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:13.001713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:13.001726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:13.001758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:13.011660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:13.011795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:13.011822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:13.011837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:13.011850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:13.011881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:13.021696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:13.021792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:13.021818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:13.021833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:13.021846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:13.021877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 
00:34:26.252 [2024-10-28 15:30:13.031709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:13.031805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:13.031830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:13.031845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:13.031858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:13.031890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:13.041698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:13.041788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.252 [2024-10-28 15:30:13.041815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.252 [2024-10-28 15:30:13.041831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.252 [2024-10-28 15:30:13.041845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.252 [2024-10-28 15:30:13.041876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.252 qpair failed and we were unable to recover it. 00:34:26.252 [2024-10-28 15:30:13.051734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.252 [2024-10-28 15:30:13.051841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.253 [2024-10-28 15:30:13.051868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.253 [2024-10-28 15:30:13.051883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.253 [2024-10-28 15:30:13.051896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.253 [2024-10-28 15:30:13.051927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.253 qpair failed and we were unable to recover it. 
00:34:26.253 [2024-10-28 15:30:13.061780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.253 [2024-10-28 15:30:13.061874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.253 [2024-10-28 15:30:13.061899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.253 [2024-10-28 15:30:13.061914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.253 [2024-10-28 15:30:13.061927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.253 [2024-10-28 15:30:13.061973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.253 qpair failed and we were unable to recover it. 00:34:26.253 [2024-10-28 15:30:13.071812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.253 [2024-10-28 15:30:13.071896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.253 [2024-10-28 15:30:13.071921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.253 [2024-10-28 15:30:13.071936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.253 [2024-10-28 15:30:13.071950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.253 [2024-10-28 15:30:13.071982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.253 qpair failed and we were unable to recover it. 00:34:26.253 [2024-10-28 15:30:13.081809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.253 [2024-10-28 15:30:13.081898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.253 [2024-10-28 15:30:13.081946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.253 [2024-10-28 15:30:13.081962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.253 [2024-10-28 15:30:13.081975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.253 [2024-10-28 15:30:13.082006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.253 qpair failed and we were unable to recover it. 
00:34:26.253 [2024-10-28 15:30:13.091890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.253 [2024-10-28 15:30:13.091997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.253 [2024-10-28 15:30:13.092024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.253 [2024-10-28 15:30:13.092038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.253 [2024-10-28 15:30:13.092050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.253 [2024-10-28 15:30:13.092091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.253 qpair failed and we were unable to recover it. 00:34:26.253 [2024-10-28 15:30:13.101924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.253 [2024-10-28 15:30:13.102032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.253 [2024-10-28 15:30:13.102056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.253 [2024-10-28 15:30:13.102071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.253 [2024-10-28 15:30:13.102084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.253 [2024-10-28 15:30:13.102115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.253 qpair failed and we were unable to recover it. 00:34:26.253 [2024-10-28 15:30:13.111933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.253 [2024-10-28 15:30:13.112042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.253 [2024-10-28 15:30:13.112068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.253 [2024-10-28 15:30:13.112083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.253 [2024-10-28 15:30:13.112096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.253 [2024-10-28 15:30:13.112137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.253 qpair failed and we were unable to recover it. 
00:34:26.513 [2024-10-28 15:30:13.121950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.122041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.122069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.122090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.122105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.122139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 00:34:26.513 [2024-10-28 15:30:13.132026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.132119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.132143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.132158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.132171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.132202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 00:34:26.513 [2024-10-28 15:30:13.142008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.142103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.142129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.142144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.142157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.142189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 
00:34:26.513 [2024-10-28 15:30:13.152044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.152130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.152155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.152169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.152181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.152211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 00:34:26.513 [2024-10-28 15:30:13.162021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.162103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.162129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.162144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.162156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.162187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 00:34:26.513 [2024-10-28 15:30:13.172096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.172199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.172225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.172241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.172254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.172284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 
00:34:26.513 [2024-10-28 15:30:13.182094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.182179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.182205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.182220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.182233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.182263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 00:34:26.513 [2024-10-28 15:30:13.192153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.192238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.192262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.192276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.513 [2024-10-28 15:30:13.192289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.513 [2024-10-28 15:30:13.192319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.513 qpair failed and we were unable to recover it. 00:34:26.513 [2024-10-28 15:30:13.202179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.513 [2024-10-28 15:30:13.202264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.513 [2024-10-28 15:30:13.202288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.513 [2024-10-28 15:30:13.202303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.202315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.202346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-10-28 15:30:13.212265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.212365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.212389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.212403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.212416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.212447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-10-28 15:30:13.222232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.222336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.222360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.222374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.222387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.222418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-10-28 15:30:13.232346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.232476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.232502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.232517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.232530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.232561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-10-28 15:30:13.242273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.242361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.242385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.242399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.242412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.242443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-10-28 15:30:13.252325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.252416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.252441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.252461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.252477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.252509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-10-28 15:30:13.262392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.262479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.262505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.262519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.262532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.262563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-10-28 15:30:13.272340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.272428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.272455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.272470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.272483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.272514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-10-28 15:30:13.282367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.282454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.282479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.282493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.282505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.282536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-10-28 15:30:13.292381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.292504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.292530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.292544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.292557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.292594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-10-28 15:30:13.302427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.514 [2024-10-28 15:30:13.302514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.514 [2024-10-28 15:30:13.302538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.514 [2024-10-28 15:30:13.302552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.514 [2024-10-28 15:30:13.302564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.514 [2024-10-28 15:30:13.302595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-10-28 15:30:13.312457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.515 [2024-10-28 15:30:13.312544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.515 [2024-10-28 15:30:13.312568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.515 [2024-10-28 15:30:13.312583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.515 [2024-10-28 15:30:13.312595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.515 [2024-10-28 15:30:13.312625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-10-28 15:30:13.322483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.515 [2024-10-28 15:30:13.322569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.515 [2024-10-28 15:30:13.322595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.515 [2024-10-28 15:30:13.322610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.515 [2024-10-28 15:30:13.322622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.515 [2024-10-28 15:30:13.322676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-10-28 15:30:13.332509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.515 [2024-10-28 15:30:13.332599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.515 [2024-10-28 15:30:13.332624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.515 [2024-10-28 15:30:13.332666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.515 [2024-10-28 15:30:13.332683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.515 [2024-10-28 15:30:13.332715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-10-28 15:30:13.342572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.515 [2024-10-28 15:30:13.342693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.515 [2024-10-28 15:30:13.342718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.515 [2024-10-28 15:30:13.342733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.515 [2024-10-28 15:30:13.342747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.515 [2024-10-28 15:30:13.342792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-10-28 15:30:13.352593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.515 [2024-10-28 15:30:13.352707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.515 [2024-10-28 15:30:13.352733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.515 [2024-10-28 15:30:13.352747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.515 [2024-10-28 15:30:13.352761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.515 [2024-10-28 15:30:13.352792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-10-28 15:30:13.362665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.515 [2024-10-28 15:30:13.362806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.515 [2024-10-28 15:30:13.362832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.515 [2024-10-28 15:30:13.362853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.515 [2024-10-28 15:30:13.362867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.515 [2024-10-28 15:30:13.362898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-10-28 15:30:13.372608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.515 [2024-10-28 15:30:13.372725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.515 [2024-10-28 15:30:13.372752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.515 [2024-10-28 15:30:13.372766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.515 [2024-10-28 15:30:13.372779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.515 [2024-10-28 15:30:13.372811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.775 [2024-10-28 15:30:13.382702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.382800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.382835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.382853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.382867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.775 [2024-10-28 15:30:13.382909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.775 qpair failed and we were unable to recover it. 
00:34:26.775 [2024-10-28 15:30:13.392711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.392800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.392827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.392842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.392856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.775 [2024-10-28 15:30:13.392889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.775 qpair failed and we were unable to recover it. 00:34:26.775 [2024-10-28 15:30:13.402755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.402842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.402869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.402885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.402898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.775 [2024-10-28 15:30:13.402930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.775 qpair failed and we were unable to recover it. 00:34:26.775 [2024-10-28 15:30:13.412763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.412863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.412888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.412903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.412915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.775 [2024-10-28 15:30:13.412947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.775 qpair failed and we were unable to recover it. 
00:34:26.775 [2024-10-28 15:30:13.422768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.422859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.422885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.422900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.422919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.775 [2024-10-28 15:30:13.422966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.775 qpair failed and we were unable to recover it. 00:34:26.775 [2024-10-28 15:30:13.432784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.432913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.432940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.432955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.432968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.775 [2024-10-28 15:30:13.433013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.775 qpair failed and we were unable to recover it. 00:34:26.775 [2024-10-28 15:30:13.442852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.442975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.443001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.443017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.443029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.775 [2024-10-28 15:30:13.443059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.775 qpair failed and we were unable to recover it. 
00:34:26.775 [2024-10-28 15:30:13.452882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.775 [2024-10-28 15:30:13.452989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.775 [2024-10-28 15:30:13.453015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.775 [2024-10-28 15:30:13.453030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.775 [2024-10-28 15:30:13.453042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.453085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.462894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.463006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.463032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.463046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.463059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.463089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.472981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.473071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.473096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.473111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.473124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.473163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 
00:34:26.776 [2024-10-28 15:30:13.482994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.483106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.483132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.483146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.483158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.483189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.493062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.493165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.493192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.493207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.493219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.493252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.503036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.503130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.503157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.503173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.503187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.503219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 
00:34:26.776 [2024-10-28 15:30:13.513104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.513200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.513229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.513244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.513257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.513287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.523092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.523184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.523210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.523224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.523237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.523266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.533116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.533203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.533230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.533245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.533258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.533290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 
00:34:26.776 [2024-10-28 15:30:13.543120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.543205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.543232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.543247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.543260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.543290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.553160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.553246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.553272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.553287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.553305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.553336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 00:34:26.776 [2024-10-28 15:30:13.563149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.776 [2024-10-28 15:30:13.563231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.776 [2024-10-28 15:30:13.563257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.776 [2024-10-28 15:30:13.563271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.776 [2024-10-28 15:30:13.563285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.776 [2024-10-28 15:30:13.563316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.776 qpair failed and we were unable to recover it. 
00:34:26.776 [2024-10-28 15:30:13.573294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.777 [2024-10-28 15:30:13.573392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.777 [2024-10-28 15:30:13.573416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.777 [2024-10-28 15:30:13.573430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.777 [2024-10-28 15:30:13.573443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.777 [2024-10-28 15:30:13.573475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.777 qpair failed and we were unable to recover it. 00:34:26.777 [2024-10-28 15:30:13.583272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.777 [2024-10-28 15:30:13.583369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.777 [2024-10-28 15:30:13.583394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.777 [2024-10-28 15:30:13.583408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.777 [2024-10-28 15:30:13.583421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.777 [2024-10-28 15:30:13.583452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.777 qpair failed and we were unable to recover it. 00:34:26.777 [2024-10-28 15:30:13.593247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.777 [2024-10-28 15:30:13.593336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.777 [2024-10-28 15:30:13.593363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.777 [2024-10-28 15:30:13.593378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.777 [2024-10-28 15:30:13.593390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.777 [2024-10-28 15:30:13.593421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.777 qpair failed and we were unable to recover it. 
00:34:26.777 [2024-10-28 15:30:13.603315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.777 [2024-10-28 15:30:13.603406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.777 [2024-10-28 15:30:13.603432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.777 [2024-10-28 15:30:13.603446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.777 [2024-10-28 15:30:13.603458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.777 [2024-10-28 15:30:13.603490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.777 qpair failed and we were unable to recover it. 00:34:26.777 [2024-10-28 15:30:13.613359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.777 [2024-10-28 15:30:13.613448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.777 [2024-10-28 15:30:13.613472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.777 [2024-10-28 15:30:13.613487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.777 [2024-10-28 15:30:13.613499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.777 [2024-10-28 15:30:13.613529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.777 qpair failed and we were unable to recover it. 00:34:26.777 [2024-10-28 15:30:13.623342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.777 [2024-10-28 15:30:13.623430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.777 [2024-10-28 15:30:13.623454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.777 [2024-10-28 15:30:13.623468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.777 [2024-10-28 15:30:13.623480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.777 [2024-10-28 15:30:13.623511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.777 qpair failed and we were unable to recover it. 
00:34:26.777 [2024-10-28 15:30:13.633402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.777 [2024-10-28 15:30:13.633524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.777 [2024-10-28 15:30:13.633551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.777 [2024-10-28 15:30:13.633565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.777 [2024-10-28 15:30:13.633578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:26.777 [2024-10-28 15:30:13.633609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.777 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.643434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.643562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.643599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.643615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.643643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.643696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.653524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.653617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.653667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.653685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.653701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.653733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 
00:34:27.038 [2024-10-28 15:30:13.663445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.663534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.663561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.663576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.663588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.663618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.673474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.673557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.673584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.673599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.673612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.673665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.683524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.683609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.683648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.683682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.683697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.683729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 
00:34:27.038 [2024-10-28 15:30:13.693597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.693723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.693751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.693767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.693780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.693811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.703607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.703721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.703747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.703762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.703775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.703807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.713623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.713790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.713817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.713832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.713844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.713875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 
00:34:27.038 [2024-10-28 15:30:13.723706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.723797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.723825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.723840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.723854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.723885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.733717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.733814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.733840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.733854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.733867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.733898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 00:34:27.038 [2024-10-28 15:30:13.743732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.743823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.038 [2024-10-28 15:30:13.743850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.038 [2024-10-28 15:30:13.743865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.038 [2024-10-28 15:30:13.743878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.038 [2024-10-28 15:30:13.743910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.038 qpair failed and we were unable to recover it. 
00:34:27.038 [2024-10-28 15:30:13.753753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.038 [2024-10-28 15:30:13.753848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.753874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.753889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.753902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.753948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.763766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.763854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.763879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.763893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.763906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.763952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.773811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.773911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.773939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.773969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.773982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.774014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 
00:34:27.039 [2024-10-28 15:30:13.783868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.783970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.783994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.784008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.784021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.784052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.793886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.794019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.794046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.794061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.794074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.794104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.803888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.803992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.804017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.804032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.804045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.804075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 
00:34:27.039 [2024-10-28 15:30:13.813964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.814071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.814097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.814118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.814131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.814171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.823964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.824070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.824094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.824108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.824120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.824150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.833988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.834076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.834101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.834116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.834129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.834160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 
00:34:27.039 [2024-10-28 15:30:13.844031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.844156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.844183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.844198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.844211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.844241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.854118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.854241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.854266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.854281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.854294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.854339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.864122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.864234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.864260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.864274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.864287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.864317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 
00:34:27.039 [2024-10-28 15:30:13.874135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.874234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.874259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.874273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.874285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.874315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.884146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.039 [2024-10-28 15:30:13.884239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.039 [2024-10-28 15:30:13.884263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.039 [2024-10-28 15:30:13.884277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.039 [2024-10-28 15:30:13.884290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.039 [2024-10-28 15:30:13.884320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.039 qpair failed and we were unable to recover it. 00:34:27.039 [2024-10-28 15:30:13.894204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.040 [2024-10-28 15:30:13.894295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.040 [2024-10-28 15:30:13.894321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.040 [2024-10-28 15:30:13.894336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.040 [2024-10-28 15:30:13.894348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.040 [2024-10-28 15:30:13.894378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.040 qpair failed and we were unable to recover it. 
00:34:27.300 [2024-10-28 15:30:13.904201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.904291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.904319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.904334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.904348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.904380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:13.914249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.914381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.914410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.914426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.914440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.914471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:13.924279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.924407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.924436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.924452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.924466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.924497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 
00:34:27.300 [2024-10-28 15:30:13.934281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.934375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.934399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.934413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.934425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.934456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:13.944340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.944472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.944505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.944521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.944534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.944564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:13.954325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.954407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.954434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.954448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.954460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.954491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 
00:34:27.300 [2024-10-28 15:30:13.964461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.964549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.964573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.964588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.964600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.964645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:13.974382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.974477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.974502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.974516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.974529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.974561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:13.984392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.984494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.984518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.984533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.984551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.984582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 
00:34:27.300 [2024-10-28 15:30:13.994426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:13.994513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:13.994538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:13.994551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:13.994564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:13.994595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:14.004444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:14.004527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:14.004555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:14.004570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:14.004584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:14.004616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 00:34:27.300 [2024-10-28 15:30:14.014489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:14.014593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:14.014619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:14.014655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:14.014671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.300 [2024-10-28 15:30:14.014715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.300 qpair failed and we were unable to recover it. 
00:34:27.300 [2024-10-28 15:30:14.024532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.300 [2024-10-28 15:30:14.024684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.300 [2024-10-28 15:30:14.024718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.300 [2024-10-28 15:30:14.024734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.300 [2024-10-28 15:30:14.024747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.024780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.034563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.034670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.034705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.034721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.034734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.034765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.044610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.044734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.044761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.044776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.044789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.044821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 
00:34:27.301 [2024-10-28 15:30:14.054594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.054724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.054751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.054767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.054779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.054812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.064624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.064757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.064784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.064799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.064812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.064845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.074738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.074832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.074864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.074880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.074894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.074935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 
00:34:27.301 [2024-10-28 15:30:14.084733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.084822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.084849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.084864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.084877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.084909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.094826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.094928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.094970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.094985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.095000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.095030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.104763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.104850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.104877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.104892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.104904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.104951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 
00:34:27.301 [2024-10-28 15:30:14.114791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.114880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.114905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.114920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.114939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.114986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.124862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.124951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.124990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.125004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.125017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.125048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.134871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.134979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.135004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.135019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.135031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.135061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 
00:34:27.301 [2024-10-28 15:30:14.144960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.145089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.145115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.145129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.145153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.145183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.301 [2024-10-28 15:30:14.154989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.301 [2024-10-28 15:30:14.155113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.301 [2024-10-28 15:30:14.155139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.301 [2024-10-28 15:30:14.155153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.301 [2024-10-28 15:30:14.155175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.301 [2024-10-28 15:30:14.155222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.301 qpair failed and we were unable to recover it. 00:34:27.562 [2024-10-28 15:30:14.164951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.562 [2024-10-28 15:30:14.165056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.562 [2024-10-28 15:30:14.165083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.562 [2024-10-28 15:30:14.165098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.562 [2024-10-28 15:30:14.165111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.165144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 
00:34:27.563 [2024-10-28 15:30:14.175006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.175106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.175134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.175149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.175162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.175193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.184997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.185109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.185136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.185152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.185165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.185195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.195011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.195104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.195130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.195145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.195159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.195191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 
00:34:27.563 [2024-10-28 15:30:14.205072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.205158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.205188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.205203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.205215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.205246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.215148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.215238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.215262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.215277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.215291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.215321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.225215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.225324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.225348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.225363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.225375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.225407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 
00:34:27.563 [2024-10-28 15:30:14.235172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.235258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.235282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.235298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.235310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.235341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.245261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.245358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.245382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.245403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.245417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.245458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.255212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.255316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.255341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.255356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.255369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.255401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 
00:34:27.563 [2024-10-28 15:30:14.265258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.265353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.265377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.265391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.265403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.265434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.275291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.275376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.275401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.275416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.275429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.275460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.285311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.285399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.285426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.285440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.285453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.285490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 
00:34:27.563 [2024-10-28 15:30:14.295368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.295460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.295487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.563 [2024-10-28 15:30:14.295504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.563 [2024-10-28 15:30:14.295517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.563 [2024-10-28 15:30:14.295547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.563 qpair failed and we were unable to recover it. 00:34:27.563 [2024-10-28 15:30:14.305363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.563 [2024-10-28 15:30:14.305455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.563 [2024-10-28 15:30:14.305480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.305495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.305507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.305536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.315355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.315488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.315514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.315528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.315541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.315571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 
00:34:27.564 [2024-10-28 15:30:14.325360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.325446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.325471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.325486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.325499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.325529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.335437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.335542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.335573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.335588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.335601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.335660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.345424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.345511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.345537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.345552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.345565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.345596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 
00:34:27.564 [2024-10-28 15:30:14.355487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.355576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.355602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.355617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.355630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.355681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.365526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.365662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.365689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.365704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.365727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.365759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.375531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.375622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.375670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.375695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.375709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.375742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 
00:34:27.564 [2024-10-28 15:30:14.385584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.385689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.385714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.385729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.385744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.385776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.395645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.395772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.395799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.395815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.395828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.395860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.405604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.405719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.405746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.405761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.405774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.405807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 
00:34:27.564 [2024-10-28 15:30:14.415692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.415826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.415853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.415869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.415882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.415921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.564 [2024-10-28 15:30:14.425719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.564 [2024-10-28 15:30:14.425816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.564 [2024-10-28 15:30:14.425844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.564 [2024-10-28 15:30:14.425859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.564 [2024-10-28 15:30:14.425872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.564 [2024-10-28 15:30:14.425905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.564 qpair failed and we were unable to recover it. 00:34:27.825 [2024-10-28 15:30:14.435767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.825 [2024-10-28 15:30:14.435856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.825 [2024-10-28 15:30:14.435883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.825 [2024-10-28 15:30:14.435898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.825 [2024-10-28 15:30:14.435912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.825 [2024-10-28 15:30:14.435960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.825 qpair failed and we were unable to recover it. 
00:34:27.825 [2024-10-28 15:30:14.445738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.825 [2024-10-28 15:30:14.445838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.825 [2024-10-28 15:30:14.445865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.825 [2024-10-28 15:30:14.445881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.825 [2024-10-28 15:30:14.445894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.825 [2024-10-28 15:30:14.445926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.825 qpair failed and we were unable to recover it. 00:34:27.825 [2024-10-28 15:30:14.455838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.825 [2024-10-28 15:30:14.455990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.825 [2024-10-28 15:30:14.456027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.825 [2024-10-28 15:30:14.456043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.825 [2024-10-28 15:30:14.456056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.825 [2024-10-28 15:30:14.456087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.825 qpair failed and we were unable to recover it. 00:34:27.825 [2024-10-28 15:30:14.465855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.825 [2024-10-28 15:30:14.465953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.825 [2024-10-28 15:30:14.465992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.825 [2024-10-28 15:30:14.466006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.825 [2024-10-28 15:30:14.466019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.825 [2024-10-28 15:30:14.466050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.825 qpair failed and we were unable to recover it. 
00:34:27.825 [2024-10-28 15:30:14.475867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.825 [2024-10-28 15:30:14.475975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.825 [2024-10-28 15:30:14.476001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.825 [2024-10-28 15:30:14.476016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.825 [2024-10-28 15:30:14.476029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.825 [2024-10-28 15:30:14.476068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.825 qpair failed and we were unable to recover it. 00:34:27.825 [2024-10-28 15:30:14.485896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.825 [2024-10-28 15:30:14.486043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.825 [2024-10-28 15:30:14.486069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.825 [2024-10-28 15:30:14.486089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.825 [2024-10-28 15:30:14.486102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.825 [2024-10-28 15:30:14.486133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.825 qpair failed and we were unable to recover it. 00:34:27.825 [2024-10-28 15:30:14.495912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.825 [2024-10-28 15:30:14.496025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.825 [2024-10-28 15:30:14.496052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.496066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.496079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.496109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 
00:34:27.826 [2024-10-28 15:30:14.505950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.506049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.506079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.506095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.506108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.506140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.516044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.516150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.516176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.516191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.516203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.516234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.525997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.526084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.526110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.526125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.526137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.526167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 
00:34:27.826 [2024-10-28 15:30:14.536052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.536142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.536166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.536181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.536194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.536224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.546099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.546184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.546208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.546223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.546240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.546271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.556087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.556209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.556236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.556252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.556265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.556296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 
00:34:27.826 [2024-10-28 15:30:14.566111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.566221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.566246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.566261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.566273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.566304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.576118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.576208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.576234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.576249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.576261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.576292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.586201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.586300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.586325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.586339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.586352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.586382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 
00:34:27.826 [2024-10-28 15:30:14.596173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.596257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.596282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.596297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.596309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.596341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.606200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.606291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.606315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.606329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.606342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.606373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.616269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.616364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.616390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.616404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.616416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.616447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 
00:34:27.826 [2024-10-28 15:30:14.626273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.826 [2024-10-28 15:30:14.626354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.826 [2024-10-28 15:30:14.626381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.826 [2024-10-28 15:30:14.626395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.826 [2024-10-28 15:30:14.626408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.826 [2024-10-28 15:30:14.626439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.826 qpair failed and we were unable to recover it. 00:34:27.826 [2024-10-28 15:30:14.636294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.827 [2024-10-28 15:30:14.636389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.827 [2024-10-28 15:30:14.636421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.827 [2024-10-28 15:30:14.636437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.827 [2024-10-28 15:30:14.636450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.827 [2024-10-28 15:30:14.636481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.827 qpair failed and we were unable to recover it. 00:34:27.827 [2024-10-28 15:30:14.646418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.827 [2024-10-28 15:30:14.646507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.827 [2024-10-28 15:30:14.646533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.827 [2024-10-28 15:30:14.646548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.827 [2024-10-28 15:30:14.646561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.827 [2024-10-28 15:30:14.646593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.827 qpair failed and we were unable to recover it. 
00:34:27.827 [2024-10-28 15:30:14.656336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.827 [2024-10-28 15:30:14.656426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.827 [2024-10-28 15:30:14.656452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.827 [2024-10-28 15:30:14.656467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.827 [2024-10-28 15:30:14.656481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.827 [2024-10-28 15:30:14.656512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.827 qpair failed and we were unable to recover it. 00:34:27.827 [2024-10-28 15:30:14.666421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.827 [2024-10-28 15:30:14.666511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.827 [2024-10-28 15:30:14.666537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.827 [2024-10-28 15:30:14.666552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.827 [2024-10-28 15:30:14.666565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.827 [2024-10-28 15:30:14.666595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.827 qpair failed and we were unable to recover it. 00:34:27.827 [2024-10-28 15:30:14.676408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.827 [2024-10-28 15:30:14.676494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.827 [2024-10-28 15:30:14.676520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.827 [2024-10-28 15:30:14.676535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.827 [2024-10-28 15:30:14.676555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.827 [2024-10-28 15:30:14.676588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.827 qpair failed and we were unable to recover it. 
00:34:27.827 [2024-10-28 15:30:14.686466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.827 [2024-10-28 15:30:14.686590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.827 [2024-10-28 15:30:14.686619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.827 [2024-10-28 15:30:14.686635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.827 [2024-10-28 15:30:14.686648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:27.827 [2024-10-28 15:30:14.686690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.827 qpair failed and we were unable to recover it. 00:34:28.088 [2024-10-28 15:30:14.696485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.696577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.696606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.696621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.696658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.696694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 00:34:28.088 [2024-10-28 15:30:14.706541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.706648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.706681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.706700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.706713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.706745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 
00:34:28.088 [2024-10-28 15:30:14.716538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.716627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.716658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.716702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.716715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.716757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 00:34:28.088 [2024-10-28 15:30:14.726588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.726728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.726756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.726771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.726785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.726818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 00:34:28.088 [2024-10-28 15:30:14.736675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.736782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.736810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.736825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.736839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.736872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 
00:34:28.088 [2024-10-28 15:30:14.746644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.746744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.746769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.746784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.746797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.746830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 00:34:28.088 [2024-10-28 15:30:14.756699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.756792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.756818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.756833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.756845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.756878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 00:34:28.088 [2024-10-28 15:30:14.766696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.766783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.766814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.766830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.088 [2024-10-28 15:30:14.766842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.088 [2024-10-28 15:30:14.766886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.088 qpair failed and we were unable to recover it. 
00:34:28.088 [2024-10-28 15:30:14.776719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.088 [2024-10-28 15:30:14.776810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.088 [2024-10-28 15:30:14.776837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.088 [2024-10-28 15:30:14.776852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.776865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.776896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.786768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.786907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.786934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.786950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.786962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.786994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.796798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.796889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.796914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.796928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.796942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.796973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 
00:34:28.089 [2024-10-28 15:30:14.806836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.806948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.806973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.806993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.807008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.807039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.816857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.816951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.816991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.817006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.817018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.817049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.826888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.826993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.827017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.827031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.827044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.827075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 
00:34:28.089 [2024-10-28 15:30:14.836906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.837021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.837048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.837064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.837077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.837107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.846877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.846980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.847021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.847037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.847049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.847088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.856997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.857098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.857125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.857140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.857153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.857183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 
00:34:28.089 [2024-10-28 15:30:14.866989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.867074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.867100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.867115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.867127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.867159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.877021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.877112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.877138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.877152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.877165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.877196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.887060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.887187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.887213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.887227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.887241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.887272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 
00:34:28.089 [2024-10-28 15:30:14.897142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.897247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.897273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.897288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.897301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.897333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.907106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.907228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.089 [2024-10-28 15:30:14.907255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.089 [2024-10-28 15:30:14.907270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.089 [2024-10-28 15:30:14.907282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.089 [2024-10-28 15:30:14.907314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.089 qpair failed and we were unable to recover it. 00:34:28.089 [2024-10-28 15:30:14.917206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.089 [2024-10-28 15:30:14.917296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.090 [2024-10-28 15:30:14.917320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.090 [2024-10-28 15:30:14.917335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.090 [2024-10-28 15:30:14.917348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.090 [2024-10-28 15:30:14.917378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.090 qpair failed and we were unable to recover it. 
00:34:28.090 [2024-10-28 15:30:14.927196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.090 [2024-10-28 15:30:14.927281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.090 [2024-10-28 15:30:14.927306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.090 [2024-10-28 15:30:14.927319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.090 [2024-10-28 15:30:14.927332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.090 [2024-10-28 15:30:14.927363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.090 qpair failed and we were unable to recover it. 00:34:28.090 [2024-10-28 15:30:14.937271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.090 [2024-10-28 15:30:14.937361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.090 [2024-10-28 15:30:14.937387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.090 [2024-10-28 15:30:14.937407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.090 [2024-10-28 15:30:14.937421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.090 [2024-10-28 15:30:14.937463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.090 qpair failed and we were unable to recover it. 00:34:28.090 [2024-10-28 15:30:14.947250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.090 [2024-10-28 15:30:14.947345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.090 [2024-10-28 15:30:14.947373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.090 [2024-10-28 15:30:14.947389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.090 [2024-10-28 15:30:14.947402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.090 [2024-10-28 15:30:14.947434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.090 qpair failed and we were unable to recover it. 
00:34:28.350 [2024-10-28 15:30:14.957276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:14.957389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:14.957416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:14.957433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:14.957446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:14.957480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 00:34:28.350 [2024-10-28 15:30:14.967254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:14.967344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:14.967372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:14.967388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:14.967401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:14.967431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 00:34:28.350 [2024-10-28 15:30:14.977335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:14.977429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:14.977454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:14.977468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:14.977482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:14.977519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 
00:34:28.350 [2024-10-28 15:30:14.987356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:14.987446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:14.987471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:14.987486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:14.987499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:14.987530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 00:34:28.350 [2024-10-28 15:30:14.997372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:14.997457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:14.997483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:14.997499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:14.997512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:14.997543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 00:34:28.350 [2024-10-28 15:30:15.007407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:15.007498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:15.007525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:15.007540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:15.007553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:15.007583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 
00:34:28.350 [2024-10-28 15:30:15.017434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:15.017533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:15.017558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:15.017573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:15.017587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:15.017618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 00:34:28.350 [2024-10-28 15:30:15.027465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.350 [2024-10-28 15:30:15.027555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.350 [2024-10-28 15:30:15.027580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.350 [2024-10-28 15:30:15.027594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.350 [2024-10-28 15:30:15.027606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.350 [2024-10-28 15:30:15.027660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.350 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.037478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.037579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.037605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.037620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.037657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.037694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 
00:34:28.351 [2024-10-28 15:30:15.047505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.047599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.047623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.047662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.047678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.047710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.057563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.057679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.057706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.057722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.057734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.057774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.067534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.067620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.067673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.067689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.067703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.067735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 
00:34:28.351 [2024-10-28 15:30:15.077601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.077715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.077741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.077756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.077770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.077801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.087562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.087680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.087708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.087723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.087736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.087768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.097672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.097811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.097838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.097854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.097866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.097898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 
00:34:28.351 [2024-10-28 15:30:15.107657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.107754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.107781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.107796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.107816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.107849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.117690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.117785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.117811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.117827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.117840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.117873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.127726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.127814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.127840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.127856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.127869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.127914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 
00:34:28.351 [2024-10-28 15:30:15.137763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.137852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.137878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.137894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.137911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.137942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.147806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.147894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.147922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.147951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.147965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.147996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 00:34:28.351 [2024-10-28 15:30:15.157790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.157878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.157906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.351 [2024-10-28 15:30:15.157921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.351 [2024-10-28 15:30:15.157935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.351 [2024-10-28 15:30:15.157968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.351 qpair failed and we were unable to recover it. 
00:34:28.351 [2024-10-28 15:30:15.167925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.351 [2024-10-28 15:30:15.168029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.351 [2024-10-28 15:30:15.168055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.352 [2024-10-28 15:30:15.168070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.352 [2024-10-28 15:30:15.168083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.352 [2024-10-28 15:30:15.168114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.352 qpair failed and we were unable to recover it. 00:34:28.352 [2024-10-28 15:30:15.177871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.352 [2024-10-28 15:30:15.177966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.352 [2024-10-28 15:30:15.178008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.352 [2024-10-28 15:30:15.178023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.352 [2024-10-28 15:30:15.178035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.352 [2024-10-28 15:30:15.178066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.352 qpair failed and we were unable to recover it. 00:34:28.352 [2024-10-28 15:30:15.187874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.352 [2024-10-28 15:30:15.187979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.352 [2024-10-28 15:30:15.188004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.352 [2024-10-28 15:30:15.188019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.352 [2024-10-28 15:30:15.188031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.352 [2024-10-28 15:30:15.188061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.352 qpair failed and we were unable to recover it. 
00:34:28.352 [2024-10-28 15:30:15.197896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.352 [2024-10-28 15:30:15.197996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.352 [2024-10-28 15:30:15.198027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.352 [2024-10-28 15:30:15.198043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.352 [2024-10-28 15:30:15.198055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.352 [2024-10-28 15:30:15.198084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.352 qpair failed and we were unable to recover it. 00:34:28.352 [2024-10-28 15:30:15.208061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.352 [2024-10-28 15:30:15.208149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.352 [2024-10-28 15:30:15.208176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.352 [2024-10-28 15:30:15.208190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.352 [2024-10-28 15:30:15.208203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.352 [2024-10-28 15:30:15.208234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.352 qpair failed and we were unable to recover it. 00:34:28.612 [2024-10-28 15:30:15.218029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.218121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.218148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.218163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.218176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.218209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 
00:34:28.612 [2024-10-28 15:30:15.228080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.228172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.228200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.228216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.228228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.228261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 00:34:28.612 [2024-10-28 15:30:15.238071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.238173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.238198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.238213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.238232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.238264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 00:34:28.612 [2024-10-28 15:30:15.248068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.248155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.248182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.248196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.248209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.248240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 
00:34:28.612 [2024-10-28 15:30:15.258147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.258239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.258264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.258278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.258291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.258321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 00:34:28.612 [2024-10-28 15:30:15.268102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.268220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.268246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.268261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.268273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.268303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 00:34:28.612 [2024-10-28 15:30:15.278142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.278226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.278252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.278266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.278280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.278310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 
00:34:28.612 [2024-10-28 15:30:15.288154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.288251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.612 [2024-10-28 15:30:15.288275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.612 [2024-10-28 15:30:15.288289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.612 [2024-10-28 15:30:15.288301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.612 [2024-10-28 15:30:15.288331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.612 qpair failed and we were unable to recover it. 00:34:28.612 [2024-10-28 15:30:15.298241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.612 [2024-10-28 15:30:15.298339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.298363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.298377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.298389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.298421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.308261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.308344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.308371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.308387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.308400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.308431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 
00:34:28.613 [2024-10-28 15:30:15.318248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.318331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.318356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.318371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.318383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.318413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.328301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.328382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.328410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.328426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.328438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.328468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.338327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.338429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.338455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.338470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.338482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.338512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 
00:34:28.613 [2024-10-28 15:30:15.348386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.348476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.348501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.348516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.348529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.348560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.358428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.358519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.358545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.358559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.358572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.358603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.368415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.368547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.368573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.368593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.368610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.368641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 
00:34:28.613 [2024-10-28 15:30:15.378435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.378529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.378553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.378567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.378580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.378611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.388518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.388686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.388741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.388758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.388771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.388805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.398497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.398600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.398625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.398674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.398692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.398726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 
00:34:28.613 [2024-10-28 15:30:15.408491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.408595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.408621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.408658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.408674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.408713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.418530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.418622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.418668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.418685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.418698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.418731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 00:34:28.613 [2024-10-28 15:30:15.428639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.613 [2024-10-28 15:30:15.428742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.613 [2024-10-28 15:30:15.428770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.613 [2024-10-28 15:30:15.428785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.613 [2024-10-28 15:30:15.428798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.613 [2024-10-28 15:30:15.428830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.613 qpair failed and we were unable to recover it. 
00:34:28.613 [2024-10-28 15:30:15.438578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.614 [2024-10-28 15:30:15.438681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.614 [2024-10-28 15:30:15.438708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.614 [2024-10-28 15:30:15.438723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.614 [2024-10-28 15:30:15.438736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.614 [2024-10-28 15:30:15.438767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.614 qpair failed and we were unable to recover it. 00:34:28.614 [2024-10-28 15:30:15.448679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.614 [2024-10-28 15:30:15.448768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.614 [2024-10-28 15:30:15.448792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.614 [2024-10-28 15:30:15.448807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.614 [2024-10-28 15:30:15.448821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.614 [2024-10-28 15:30:15.448852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.614 qpair failed and we were unable to recover it. 00:34:28.614 [2024-10-28 15:30:15.458708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.614 [2024-10-28 15:30:15.458825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.614 [2024-10-28 15:30:15.458853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.614 [2024-10-28 15:30:15.458869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.614 [2024-10-28 15:30:15.458883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.614 [2024-10-28 15:30:15.458915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.614 qpair failed and we were unable to recover it. 
00:34:28.614 [2024-10-28 15:30:15.468712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.614 [2024-10-28 15:30:15.468821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.614 [2024-10-28 15:30:15.468845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.614 [2024-10-28 15:30:15.468861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.614 [2024-10-28 15:30:15.468874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.614 [2024-10-28 15:30:15.468907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.614 qpair failed and we were unable to recover it. 00:34:28.873 [2024-10-28 15:30:15.478783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.478876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.478905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.478922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.478951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.478983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 00:34:28.873 [2024-10-28 15:30:15.488768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.488857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.488887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.488903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.488917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.488963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 
00:34:28.873 [2024-10-28 15:30:15.498908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.499024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.499048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.499069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.499083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.499115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 00:34:28.873 [2024-10-28 15:30:15.508838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.508970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.508995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.509010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.509023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.509055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 00:34:28.873 [2024-10-28 15:30:15.518882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.519015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.519043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.519058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.519071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.519101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 
00:34:28.873 [2024-10-28 15:30:15.528881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.528978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.529002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.529016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.529029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.529060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 00:34:28.873 [2024-10-28 15:30:15.539029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.539123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.539147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.539161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.539173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.539210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 00:34:28.873 [2024-10-28 15:30:15.548979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.549118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.549144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.873 [2024-10-28 15:30:15.549159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.873 [2024-10-28 15:30:15.549171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.873 [2024-10-28 15:30:15.549202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.873 qpair failed and we were unable to recover it. 
00:34:28.873 [2024-10-28 15:30:15.558923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.873 [2024-10-28 15:30:15.559022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.873 [2024-10-28 15:30:15.559047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.559061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.559074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.559106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.569006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.569122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.569148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.569164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.569177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.569215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.579069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.579180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.579206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.579223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.579236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.579268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 
00:34:28.874 [2024-10-28 15:30:15.589029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.589134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.589160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.589176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.589188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.589218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.599025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.599120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.599146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.599161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.599173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.599204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.609136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.609261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.609288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.609303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.609316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.609347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 
00:34:28.874 [2024-10-28 15:30:15.619129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.619219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.619244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.619259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.619272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.619313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.629172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.629261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.629290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.629305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.629318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.629349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.639153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.639236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.639261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.639275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.639287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.639318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 
00:34:28.874 [2024-10-28 15:30:15.649209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.649305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.649331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.649345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.649358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.649388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.659271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.659365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.659388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.659404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.659416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.659447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.669250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.669345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.669370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.669385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.669402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.669443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 
00:34:28.874 [2024-10-28 15:30:15.679324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.679449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.679476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.679491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.679504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.679535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.689299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.689383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.874 [2024-10-28 15:30:15.689410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.874 [2024-10-28 15:30:15.689424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.874 [2024-10-28 15:30:15.689437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.874 [2024-10-28 15:30:15.689467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.874 qpair failed and we were unable to recover it. 00:34:28.874 [2024-10-28 15:30:15.699334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.874 [2024-10-28 15:30:15.699427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.875 [2024-10-28 15:30:15.699453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.875 [2024-10-28 15:30:15.699467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.875 [2024-10-28 15:30:15.699480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.875 [2024-10-28 15:30:15.699511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.875 qpair failed and we were unable to recover it. 
00:34:28.875 [2024-10-28 15:30:15.709433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.875 [2024-10-28 15:30:15.709534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.875 [2024-10-28 15:30:15.709557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.875 [2024-10-28 15:30:15.709571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.875 [2024-10-28 15:30:15.709583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.875 [2024-10-28 15:30:15.709614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.875 qpair failed and we were unable to recover it. 00:34:28.875 [2024-10-28 15:30:15.719414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.875 [2024-10-28 15:30:15.719520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.875 [2024-10-28 15:30:15.719545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.875 [2024-10-28 15:30:15.719559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.875 [2024-10-28 15:30:15.719571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.875 [2024-10-28 15:30:15.719601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.875 qpair failed and we were unable to recover it. 00:34:28.875 [2024-10-28 15:30:15.729421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.875 [2024-10-28 15:30:15.729507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.875 [2024-10-28 15:30:15.729534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.875 [2024-10-28 15:30:15.729549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.875 [2024-10-28 15:30:15.729563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:28.875 [2024-10-28 15:30:15.729593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.875 qpair failed and we were unable to recover it. 
00:34:29.134 [2024-10-28 15:30:15.739558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.739678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.739708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.739724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.739737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.739779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.749475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.749560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.749588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.749604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.749616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.749655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.759551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.759696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.759732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.759750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.759763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.759796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 
00:34:29.135 [2024-10-28 15:30:15.769567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.769680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.769707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.769721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.769734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.769765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.779594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.779705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.779731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.779747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.779760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.779798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.789622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.789751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.789777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.789792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.789804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.789848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 
00:34:29.135 [2024-10-28 15:30:15.799567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.799671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.799699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.799714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.799733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.799766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.809616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.809749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.809776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.809792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.809805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.809837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.819759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.819856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.819883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.819899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.819912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.819959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 
00:34:29.135 [2024-10-28 15:30:15.829712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.829811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.829838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.829854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.829867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.829899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.839738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.839848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.839875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.839890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.839904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.839934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.849796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.849917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.849954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.849985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.849998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.850029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 
00:34:29.135 [2024-10-28 15:30:15.859848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.859974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.135 [2024-10-28 15:30:15.860000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.135 [2024-10-28 15:30:15.860015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.135 [2024-10-28 15:30:15.860028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.135 [2024-10-28 15:30:15.860059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.135 qpair failed and we were unable to recover it. 00:34:29.135 [2024-10-28 15:30:15.869811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.135 [2024-10-28 15:30:15.869911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.869938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.869953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.869965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.136 [2024-10-28 15:30:15.869997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.136 qpair failed and we were unable to recover it. 00:34:29.136 [2024-10-28 15:30:15.879874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.136 [2024-10-28 15:30:15.879982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.880008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.880023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.880036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.136 [2024-10-28 15:30:15.880067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.136 qpair failed and we were unable to recover it. 
00:34:29.136 [2024-10-28 15:30:15.889860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.136 [2024-10-28 15:30:15.889955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.889981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.889997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.890010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.136 [2024-10-28 15:30:15.890042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.136 qpair failed and we were unable to recover it. 00:34:29.136 [2024-10-28 15:30:15.899964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.136 [2024-10-28 15:30:15.900056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.900081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.900095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.900108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.136 [2024-10-28 15:30:15.900138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.136 qpair failed and we were unable to recover it. 00:34:29.136 [2024-10-28 15:30:15.909942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.136 [2024-10-28 15:30:15.910030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.910071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.910087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.910100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.136 [2024-10-28 15:30:15.910132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.136 qpair failed and we were unable to recover it. 
00:34:29.136 [2024-10-28 15:30:15.919967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.136 [2024-10-28 15:30:15.920082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.920109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.920125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.920138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea44000b90 00:34:29.136 [2024-10-28 15:30:15.920171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.136 qpair failed and we were unable to recover it. 00:34:29.136 [2024-10-28 15:30:15.930094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.136 [2024-10-28 15:30:15.930239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.930284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.930316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.930337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e9d570 00:34:29.136 [2024-10-28 15:30:15.930381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.136 qpair failed and we were unable to recover it. 00:34:29.136 [2024-10-28 15:30:15.940130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.136 [2024-10-28 15:30:15.940301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.136 [2024-10-28 15:30:15.940338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.136 [2024-10-28 15:30:15.940359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.136 [2024-10-28 15:30:15.940376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e9d570 00:34:29.136 [2024-10-28 15:30:15.940417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.136 qpair failed and we were unable to recover it. 00:34:29.136 [2024-10-28 15:30:15.940569] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:34:29.136 A controller has encountered a failure and is being reset. 00:34:29.394 Controller properly reset. 
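The repeated "CQ transport error -6 (No such device or address)" entries above are the host poller hitting -ENXIO while the target, which this disconnect test deliberately disturbs, rejects each fabrics CONNECT with "Unknown controller ID 0x1"; once the keep-alive submission also fails, the host falls back to a full controller reset, which the "Controller properly reset." line reports. The C sketch below is not the autotest's code, only a minimal illustration of that host-side recovery pattern using public SPDK calls (spdk_nvme_qpair_process_completions, spdk_nvme_ctrlr_is_failed, spdk_nvme_ctrlr_reset); the poll_io_qpair wrapper and its bare error handling are illustrative assumptions.

#include <stdio.h>
#include "spdk/nvme.h"

/* Minimal sketch, assuming an already-connected ctrlr/qpair pair obtained
 * elsewhere: poll for completions and, on a transport error such as the
 * -6 (-ENXIO) seen above, recover by resetting the controller. */
static void
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
        /* 0 = no limit: drain every completion currently available. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0 || spdk_nvme_ctrlr_is_failed(ctrlr)) {
                /* The qpair hit a transport error or the controller is marked
                 * failed; reset the controller, as the log above reports. */
                if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                        fprintf(stderr, "controller reset failed\n");
                }
        }
}

After a successful reset the application still has to reconnect or re-create its I/O qpairs before resuming I/O.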
00:34:29.395 Initializing NVMe Controllers 00:34:29.395 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:29.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:29.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:29.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:29.395 Initialization complete. Launching workers. 00:34:29.395 Starting thread on core 1 00:34:29.395 Starting thread on core 2 00:34:29.395 Starting thread on core 3 00:34:29.395 Starting thread on core 0 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:29.395 00:34:29.395 real 0m11.268s 00:34:29.395 user 0m20.153s 00:34:29.395 sys 0m5.701s 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:29.395 ************************************ 00:34:29.395 END TEST nvmf_target_disconnect_tc2 00:34:29.395 ************************************ 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.395 rmmod nvme_tcp 00:34:29.395 rmmod nvme_fabrics 00:34:29.395 rmmod nvme_keyring 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3319861 ']' 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3319861 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3319861 ']' 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3319861 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:34:29.395 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319861 00:34:29.655 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:34:29.655 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:34:29.655 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319861' 00:34:29.655 killing process with pid 3319861 00:34:29.655 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3319861 00:34:29.655 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3319861 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.914 15:30:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.445 15:30:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.445 00:34:32.445 real 0m17.323s 00:34:32.445 user 0m47.671s 00:34:32.445 sys 0m8.460s 00:34:32.445 15:30:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.445 15:30:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.445 ************************************ 00:34:32.445 END TEST nvmf_target_disconnect 00:34:32.445 ************************************ 00:34:32.445 15:30:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:32.445 00:34:32.445 real 6m29.736s 00:34:32.445 user 13m32.122s 00:34:32.445 sys 1m37.044s 00:34:32.445 15:30:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.445 15:30:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.445 ************************************ 00:34:32.445 END TEST nvmf_host 00:34:32.445 ************************************ 00:34:32.445 15:30:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:34:32.445 15:30:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:34:32.445 15:30:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:32.445 15:30:18 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:32.445 15:30:18 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.445 15:30:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.445 ************************************ 00:34:32.445 START TEST nvmf_target_core_interrupt_mode 00:34:32.445 ************************************ 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:32.445 * Looking for test storage... 00:34:32.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # lcov --version 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:34:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.445 --rc genhtml_branch_coverage=1 00:34:32.445 --rc genhtml_function_coverage=1 00:34:32.445 --rc genhtml_legend=1 00:34:32.445 --rc geninfo_all_blocks=1 00:34:32.445 --rc geninfo_unexecuted_blocks=1 00:34:32.445 00:34:32.445 ' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:34:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.445 --rc genhtml_branch_coverage=1 00:34:32.445 --rc genhtml_function_coverage=1 00:34:32.445 --rc genhtml_legend=1 00:34:32.445 --rc geninfo_all_blocks=1 00:34:32.445 --rc geninfo_unexecuted_blocks=1 00:34:32.445 00:34:32.445 ' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:34:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.445 --rc genhtml_branch_coverage=1 00:34:32.445 --rc genhtml_function_coverage=1 00:34:32.445 --rc genhtml_legend=1 00:34:32.445 --rc geninfo_all_blocks=1 00:34:32.445 --rc geninfo_unexecuted_blocks=1 00:34:32.445 00:34:32.445 ' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:34:32.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.445 --rc genhtml_branch_coverage=1 00:34:32.445 --rc genhtml_function_coverage=1 00:34:32.445 --rc genhtml_legend=1 00:34:32.445 --rc geninfo_all_blocks=1 00:34:32.445 --rc geninfo_unexecuted_blocks=1 00:34:32.445 00:34:32.445 ' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.445 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.446 15:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.446 ************************************ 00:34:32.446 START TEST nvmf_abort 00:34:32.446 ************************************ 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:32.446 * Looking for test storage... 00:34:32.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # lcov --version 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:34:32.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.446 --rc genhtml_branch_coverage=1 00:34:32.446 --rc genhtml_function_coverage=1 00:34:32.446 --rc genhtml_legend=1 00:34:32.446 --rc geninfo_all_blocks=1 00:34:32.446 --rc geninfo_unexecuted_blocks=1 00:34:32.446 00:34:32.446 ' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:34:32.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.446 --rc genhtml_branch_coverage=1 00:34:32.446 --rc genhtml_function_coverage=1 00:34:32.446 --rc genhtml_legend=1 00:34:32.446 --rc geninfo_all_blocks=1 00:34:32.446 --rc geninfo_unexecuted_blocks=1 00:34:32.446 00:34:32.446 ' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:34:32.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.446 --rc genhtml_branch_coverage=1 00:34:32.446 --rc genhtml_function_coverage=1 00:34:32.446 --rc genhtml_legend=1 00:34:32.446 --rc geninfo_all_blocks=1 00:34:32.446 --rc geninfo_unexecuted_blocks=1 00:34:32.446 00:34:32.446 ' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:34:32.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.446 --rc genhtml_branch_coverage=1 00:34:32.446 --rc genhtml_function_coverage=1 00:34:32.446 --rc genhtml_legend=1 00:34:32.446 --rc geninfo_all_blocks=1 00:34:32.446 --rc geninfo_unexecuted_blocks=1 00:34:32.446 00:34:32.446 ' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.446 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.447 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:34:32.447 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.447 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:34:32.447 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.447 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.705 15:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.705 15:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.994 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.995 15:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:35.995 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:35.995 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:35.995 Found net devices under 0000:84:00.0: cvl_0_0 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:35.995 Found net devices under 0000:84:00.1: cvl_0_1 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:34:35.995 00:34:35.995 --- 10.0.0.2 ping statistics --- 00:34:35.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.995 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:34:35.995 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:35.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:34:35.995 00:34:35.995 --- 10.0.0.1 ping statistics --- 00:34:35.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.995 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3322908 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3322908 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3322908 ']' 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 [2024-10-28 15:30:22.422284] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:35.996 [2024-10-28 15:30:22.423362] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:34:35.996 [2024-10-28 15:30:22.423420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.996 [2024-10-28 15:30:22.503007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:35.996 [2024-10-28 15:30:22.572669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.996 [2024-10-28 15:30:22.572736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.996 [2024-10-28 15:30:22.572753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.996 [2024-10-28 15:30:22.572767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.996 [2024-10-28 15:30:22.572779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.996 [2024-10-28 15:30:22.574537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.996 [2024-10-28 15:30:22.574573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.996 [2024-10-28 15:30:22.574576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.996 [2024-10-28 15:30:22.680348] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:35.996 [2024-10-28 15:30:22.680575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:35.996 [2024-10-28 15:30:22.680613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
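The nvmf_tcp_init sequence traced above is what lets a single machine act as both target and initiator over a pair of cabled E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule tagged SPDK_NVMF admits TCP port 4420, and nvmf_tgt is then launched inside the namespace (here with -m 0xE and --interrupt-mode). A condensed sketch of that topology setup, assuming the interface and namespace names from the log and that the two ports are cabled back-to-back:

  set -e
  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0     # target-side port, moved into the namespace
  INI_IF=cvl_0_1     # initiator-side port, stays in the root namespace

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Admit NVMe/TCP traffic on the default port; the comment tag lets teardown find the rule later.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

  ping -c 1 10.0.0.2                      # root namespace -> target side
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> initiator side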
00:34:35.996 [2024-10-28 15:30:22.680852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 [2024-10-28 15:30:22.751625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 Malloc0 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 Delay0 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
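The rpc_cmd calls traced above configure the freshly started target through its default /var/tmp/spdk.sock RPC socket: the TCP transport, a 64 MB Malloc bdev with a 4096-byte block size, a Delay bdev stacked on top of it (the large -r/-t/-w/-n latencies keep I/O queued long enough for aborts to land), and subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 attached as its namespace; the listener is added in the next step of the trace. Issued by hand with scripts/rpc.py, the same sequence would look roughly like this (repository path taken from the log):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0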
00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.996 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:35.996 [2024-10-28 15:30:22.859597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.255 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.255 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:36.255 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.255 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:36.255 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.255 15:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:34:36.255 [2024-10-28 15:30:22.931319] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:38.155 Initializing NVMe Controllers 00:34:38.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:38.155 controller IO queue size 128 less than required 00:34:38.155 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:34:38.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:34:38.155 Initialization complete. Launching workers. 
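With the subsystem defined, the trace above adds a TCP listener for it on 10.0.0.2:4420 (plus one for the discovery subsystem) and then drives it with the build/examples/abort tool, roughly: one second of load from a single core at queue depth 128; the warning about the unsupported discovery referral is expected here. Run by hand, those two steps would look like this (paths and transport ID string copied from the log; the flag explanations in the comment are an interpretation, not quoted from the tool's help):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # -c 0x1: core mask, -t 1: run time in seconds, -q 128: queue depth, -l warning: log level
  $SPDK/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128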
00:34:38.155 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26057 00:34:38.155 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26114, failed to submit 66 00:34:38.155 success 26057, unsuccessful 57, failed 0 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.155 15:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.155 rmmod nvme_tcp 00:34:38.155 rmmod nvme_fabrics 00:34:38.416 rmmod nvme_keyring 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3322908 ']' 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3322908 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3322908 ']' 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3322908 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3322908 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3322908' 00:34:38.416 killing process with pid 3322908 
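The abort statistics above summarize the run (26114 aborts submitted, 26057 of them successful, 57 unsuccessful, 0 failed); everything after this point is teardown: the subsystem is deleted, nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the target, and the iptables rule and namespace created earlier are removed in the next few trace lines. A condensed sketch of that cleanup, reusing the SPDK_NVMF comment tag and the names from the log; $nvmfpid stands for the target PID captured at start-up:

  NS=cvl_0_0_ns_spdk

  kill "$nvmfpid" || true                       # stop the nvmf_tgt started earlier

  # Drop only the SPDK-tagged firewall rule, leaving all other rules intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip netns delete "$NS"                         # physical cvl_0_0 returns to the root namespace
  ip -4 addr flush cvl_0_1                      # clear the initiator-side address
  modprobe -r nvme-tcp nvme-fabrics || true     # best effort, as nvmfcleanup does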
00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3322908 00:34:38.416 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3322908 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.986 15:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:40.894 00:34:40.894 real 0m8.612s 00:34:40.894 user 0m9.580s 00:34:40.894 sys 0m3.810s 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.894 ************************************ 00:34:40.894 END TEST nvmf_abort 00:34:40.894 ************************************ 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:40.894 ************************************ 00:34:40.894 START TEST nvmf_ns_hotplug_stress 00:34:40.894 ************************************ 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:34:40.894 * Looking for test storage... 
00:34:40.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:34:40.894 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:34:41.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.155 --rc genhtml_branch_coverage=1 00:34:41.155 --rc genhtml_function_coverage=1 00:34:41.155 --rc genhtml_legend=1 00:34:41.155 --rc geninfo_all_blocks=1 00:34:41.155 --rc geninfo_unexecuted_blocks=1 00:34:41.155 00:34:41.155 ' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:34:41.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.155 --rc genhtml_branch_coverage=1 00:34:41.155 --rc genhtml_function_coverage=1 00:34:41.155 --rc genhtml_legend=1 00:34:41.155 --rc geninfo_all_blocks=1 00:34:41.155 --rc geninfo_unexecuted_blocks=1 00:34:41.155 00:34:41.155 ' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:34:41.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.155 --rc genhtml_branch_coverage=1 00:34:41.155 --rc genhtml_function_coverage=1 00:34:41.155 --rc genhtml_legend=1 00:34:41.155 --rc geninfo_all_blocks=1 00:34:41.155 --rc geninfo_unexecuted_blocks=1 00:34:41.155 00:34:41.155 ' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:34:41.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.155 --rc genhtml_branch_coverage=1 00:34:41.155 --rc genhtml_function_coverage=1 
00:34:41.155 --rc genhtml_legend=1 00:34:41.155 --rc geninfo_all_blocks=1 00:34:41.155 --rc geninfo_unexecuted_blocks=1 00:34:41.155 00:34:41.155 ' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
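The scripts/common.sh trace just above is a version gate for the coverage tooling: the installed lcov version (here 1.15, the last field of lcov --version) is split on '.', '-' and ':' and compared numerically field by field against 2, and since 1.15 < 2 the pre-2.0 set of --rc lcov_* options is exported. A minimal sketch of that dotted-version comparison; version_lt is an illustrative name, not the function the scripts actually use:

  # version_lt A B  -> exit 0 if dotted version A is strictly less than B.
  version_lt() {
    local IFS=.-: a=() b=() i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # versions are equal
  }

  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov" || echo "lcov >= 2"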
00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.155 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.156 15:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:44.451 15:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:44.451 15:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:44.451 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:44.451 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.451 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.452 
15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:44.452 Found net devices under 0000:84:00.0: cvl_0_0 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:44.452 Found net devices under 0000:84:00.1: cvl_0_1 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:44.452 15:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:44.452 15:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:44.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:44.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:34:44.452 00:34:44.452 --- 10.0.0.2 ping statistics --- 00:34:44.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.452 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:44.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:34:44.452 00:34:44.452 --- 10.0.0.1 ping statistics --- 00:34:44.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.452 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3325304 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3325304 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3325304 ']' 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:44.452 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
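The nvmf_tcp_init sequence traced above reduces to a handful of ip and iptables commands: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the NVMe-oF target, the other (cvl_0_1) stays in the root namespace as the initiator, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened, and connectivity is checked in both directions. A condensed sketch of that flow, copied from the commands echoed in the trace (the authoritative version, including the iptables comment bookkeeping, is the nvmf_tcp_init helper in nvmf/common.sh):

  # Target-side port goes into its own namespace; the initiator port stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator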
00:34:44.453 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:44.453 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:44.453 [2024-10-28 15:30:31.156093] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:44.453 [2024-10-28 15:30:31.158676] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:34:44.453 [2024-10-28 15:30:31.158802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.715 [2024-10-28 15:30:31.339142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:44.715 [2024-10-28 15:30:31.457573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.715 [2024-10-28 15:30:31.457689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.715 [2024-10-28 15:30:31.457729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.715 [2024-10-28 15:30:31.457760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.715 [2024-10-28 15:30:31.457785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.715 [2024-10-28 15:30:31.460884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.715 [2024-10-28 15:30:31.460992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:44.715 [2024-10-28 15:30:31.460998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.974 [2024-10-28 15:30:31.628714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:44.974 [2024-10-28 15:30:31.628963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:44.974 [2024-10-28 15:30:31.628975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:44.974 [2024-10-28 15:30:31.629301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
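With the namespace plumbed, nvmfappstart launches the target application inside it in interrupt mode on cores 1-3 (-m 0xE) and blocks until the RPC server answers on /var/tmp/spdk.sock. A minimal stand-in for that step, assuming the build layout used by this job; the polling loop is an illustrative simplification of waitforlisten, and rpc_get_methods is used here only as a cheap liveness probe:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready to accept configuration.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done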
00:34:44.974 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:44.974 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:34:44.974 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:44.975 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:44.975 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:44.975 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.975 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:34:44.975 15:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:45.235 [2024-10-28 15:30:32.073871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.496 15:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:46.067 15:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:46.637 [2024-10-28 15:30:33.334798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.637 15:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:47.207 15:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:34:48.147 Malloc0 00:34:48.147 15:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:48.770 Delay0 00:34:48.770 15:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:49.339 15:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:34:49.597 NULL1 00:34:49.597 15:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
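The rpc.py calls that follow configure everything the stress test needs: a TCP transport, an allow-any-host subsystem (cnode1) capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and two hot-pluggable namespaces, a delay bdev layered on a 32 MiB malloc bdev (Delay0) plus a resizable null bdev (NULL1). Gathered into one place with the same arguments as the trace:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0                # 32 MiB backing store, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0                 # first namespace added -> nsid 1
  $rpc bdev_null_create NULL1 1000 512                     # resizable null bdev
  $rpc nvmf_subsystem_add_ns "$nqn" NULL1                  # second namespace added -> nsid 2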
00:34:50.163 15:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3325987 00:34:50.163 15:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:34:50.163 15:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:50.163 15:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:34:50.420 15:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:50.986 15:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:50.986 15:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:51.243 true 00:34:51.243 15:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:34:51.243 15:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:51.501 15:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:51.759 15:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:51.759 15:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:34:52.324 true 00:34:52.324 15:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:34:52.324 15:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:53.700 Read completed with error (sct=0, sc=11) 00:34:53.700 15:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.700 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:34:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:53.957 15:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:34:53.957 15:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:34:54.215 true 00:34:54.215 15:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:34:54.215 15:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:55.150 15:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:55.408 15:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:34:55.408 15:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:34:55.666 true 00:34:55.666 15:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:34:55.666 15:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:56.599 15:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.599 
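Everything from here until the perf summary is the hot-plug loop itself: spdk_nvme_perf drives 30 seconds of queue-depth-128 random reads from the root namespace while the harness repeatedly detaches namespace 1, re-attaches Delay0, and resizes NULL1 one step per pass. The recurring 'Read completed with error (sct=0, sc=11)' lines are expected; status 11 (0x0b, Invalid Namespace or Format) is what in-flight reads get when their namespace is yanked, and perf collapses the repeats into 'Message suppressed 999 times' summaries. A reconstruction of the ns_hotplug_stress.sh lines 40-50 echoed in the trace:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  nqn=nqn.2016-06.io.spdk:cnode1
  # Initiator-side load: 30 s of queue-depth-128 random 512-byte reads over NVMe/TCP.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  # Keep hot-plugging until perf exits.
  while kill -0 "$PERF_PID"; do
      $rpc nvmf_subsystem_remove_ns "$nqn" 1        # detach nsid 1 out from under the workload
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0      # put it back
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"      # grow the other namespace's bdev one step
  done

The effect of the delay bdev is visible in the latency summary further down: nsid 1 (Delay0) averages roughly 21 ms per read against roughly 9 ms for nsid 2 (NULL1).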
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:56.858 15:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:34:56.858 15:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:34:57.116 true 00:34:57.116 15:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:34:57.116 15:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:58.049 15:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:58.615 15:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:34:58.615 15:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:34:58.873 true 00:34:58.873 15:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:34:58.873 15:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 15:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:00.504 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:35:00.504 15:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:35:00.504 15:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:35:01.069 true 00:35:01.069 15:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:01.069 15:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:01.634 15:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:01.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:01.892 15:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:35:01.892 15:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:35:02.150 true 00:35:02.150 15:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:02.150 15:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:03.082 15:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:03.082 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:35:03.340 15:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:35:03.340 15:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:35:03.596 true 00:35:03.596 15:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:03.596 15:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:04.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:04.161 15:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:04.726 15:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:35:04.726 15:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:35:04.983 true 00:35:04.983 15:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:04.983 15:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:06.356 15:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:06.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:06.614 15:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:35:06.614 15:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:35:07.179 true 00:35:07.179 15:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:07.179 15:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:07.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:07.746 15:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:07.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:07.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:07.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:07.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:07.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:08.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:08.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:08.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:08.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:08.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:08.004 15:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:35:08.004 15:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:35:08.570 true 00:35:08.570 15:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:08.570 15:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:09.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.136 15:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:09.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:09.393 15:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:35:09.393 15:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:35:09.958 true 00:35:09.958 15:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:09.958 15:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:10.216 15:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:10.781 15:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:35:10.781 15:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:35:11.347 true 00:35:11.347 15:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:11.347 15:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:12.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:12.721 15:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:12.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:12.980 15:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:35:12.980 15:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:35:13.238 true 00:35:13.238 15:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:13.238 15:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.613 15:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:14.871 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:35:14.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:15.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:15.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:15.129 15:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:35:15.129 15:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:35:15.696 true 00:35:15.696 15:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:15.696 15:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:16.328 15:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:16.600 15:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:35:16.600 15:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:35:16.858 true 00:35:16.858 15:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:16.858 15:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:17.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:17.423 15:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:17.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:17.938 15:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:35:17.938 15:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:35:18.196 true 00:35:18.196 15:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:18.196 15:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:19.568 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:19.568 15:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:19.568 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:19.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:20.083 15:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:35:20.083 15:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:35:20.648 Initializing NVMe Controllers 00:35:20.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:20.648 Controller IO queue size 128, less than required. 00:35:20.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:20.648 Controller IO queue size 128, less than required. 00:35:20.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:20.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:20.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:20.648 Initialization complete. Launching workers. 
00:35:20.648 ======================================================== 00:35:20.648 Latency(us) 00:35:20.648 Device Information : IOPS MiB/s Average min max 00:35:20.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4091.34 2.00 21118.85 2501.93 1100770.43 00:35:20.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13717.21 6.70 9307.63 1401.01 367298.35 00:35:20.648 ======================================================== 00:35:20.648 Total : 17808.55 8.70 12021.14 1401.01 1100770.43 00:35:20.648 00:35:20.648 true 00:35:20.648 15:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3325987 00:35:20.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3325987) - No such process 00:35:20.648 15:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3325987 00:35:20.648 15:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:21.213 15:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:21.472 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:35:21.472 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:35:21.472 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:35:21.472 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:21.472 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:35:22.040 null0 00:35:22.040 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:22.040 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:22.040 15:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:35:22.975 null1 00:35:22.975 15:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:22.975 15:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:22.975 15:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:35:23.233 null2 00:35:23.233 15:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:23.233 15:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
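Once perf exits, kill -0 starts failing ('No such process'), the loop breaks, and the harness detaches the two remaining namespaces before setting up the second phase: eight small null bdevs that eight concurrent workers will attach and detach in parallel. Roughly, following the trace:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  nqn=nqn.2016-06.io.spdk:cnode1
  wait "$PERF_PID"                              # PERF_PID from the perf launch above
  $rpc nvmf_subsystem_remove_ns "$nqn" 1
  $rpc nvmf_subsystem_remove_ns "$nqn" 2
  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096   # null0..null7: 100 MiB each, 4 KiB blocks
  done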
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:23.233 15:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:35:23.800 null3 00:35:24.060 15:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:24.060 15:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:24.060 15:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:35:24.631 null4 00:35:24.631 15:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:24.631 15:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:24.631 15:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:35:25.202 null5 00:35:25.202 15:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:25.202 15:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:25.202 15:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:35:25.461 null6 00:35:25.461 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:25.461 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:25.461 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:35:26.402 null7 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:26.402 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3330092 3330093 3330096 3330097 3330099 3330101 3330103 3330105 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.403 15:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:26.660 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:26.660 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:26.660 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:26.660 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:26.660 15:31:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:26.660 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:26.660 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:26.661 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:26.919 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:27.176 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:27.176 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:27.176 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:27.176 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:27.176 15:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
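The add_remove worker that produces the repeating @16-@18 add/remove entries can likewise be read back out of the trace. A sketch under the same caveat (only the xtrace markers are visible here, so argument handling and variable names are assumptions):
  # reconstructed sketch of the per-namespace worker seen at ns_hotplug_stress.sh@14-@18
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          # attach the bdev as namespace <nsid> of cnode1, then detach it again
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }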
00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:27.434 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.435 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.435 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:27.772 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:27.773 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:27.773 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:27.773 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.038 15:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:28.296 15:31:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:28.296 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:28.554 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:28.554 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:28.554 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:28.554 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.812 15:31:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:28.812 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:29.070 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:29.071 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:29.329 
15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:29.329 15:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.329 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:29.589 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:29.590 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:29.848 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:29.848 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:29.848 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:29.848 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:29.848 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:29.849 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.107 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:30.365 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.365 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.365 15:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.365 
15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:30.365 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.624 15:31:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.624 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:30.882 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:30.883 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:31.141 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:31.141 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:31.141 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:31.141 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:31.141 15:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:31.399 15:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.399 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:31.657 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:31.916 
15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.916 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.174 15:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:32.174 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:32.174 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:32.432 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:32.432 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:32.432 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:32.432 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.691 15:31:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.691 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.259 15:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.259 rmmod nvme_tcp 00:35:33.259 rmmod nvme_fabrics 00:35:33.259 rmmod nvme_keyring 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3325304 ']' 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3325304 00:35:33.259 15:31:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3325304 ']' 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3325304 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:33.259 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3325304 00:35:33.517 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:33.517 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:33.517 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3325304' 00:35:33.517 killing process with pid 3325304 00:35:33.517 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3325304 00:35:33.517 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3325304 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:33.777 15:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.685 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:35.685 00:35:35.685 real 0m54.848s 00:35:35.685 user 3m31.341s 00:35:35.685 sys 0m29.811s 00:35:35.685 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:35.685 15:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 ************************************ 00:35:35.685 END TEST nvmf_ns_hotplug_stress 00:35:35.685 ************************************ 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:35.946 ************************************ 00:35:35.946 START TEST nvmf_delete_subsystem 00:35:35.946 ************************************ 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:35.946 * Looking for test storage... 00:35:35.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lcov --version 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:35:35.946 15:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:35.946 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:35.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.946 --rc genhtml_branch_coverage=1 00:35:35.946 --rc genhtml_function_coverage=1 00:35:35.946 --rc genhtml_legend=1 00:35:35.947 --rc geninfo_all_blocks=1 00:35:35.947 --rc geninfo_unexecuted_blocks=1 00:35:35.947 00:35:35.947 ' 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:35.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.947 --rc genhtml_branch_coverage=1 00:35:35.947 --rc genhtml_function_coverage=1 00:35:35.947 --rc genhtml_legend=1 00:35:35.947 --rc geninfo_all_blocks=1 00:35:35.947 --rc geninfo_unexecuted_blocks=1 00:35:35.947 00:35:35.947 ' 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:35.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.947 --rc genhtml_branch_coverage=1 00:35:35.947 --rc genhtml_function_coverage=1 00:35:35.947 --rc genhtml_legend=1 00:35:35.947 --rc geninfo_all_blocks=1 00:35:35.947 --rc 
geninfo_unexecuted_blocks=1 00:35:35.947 00:35:35.947 ' 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:35.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.947 --rc genhtml_branch_coverage=1 00:35:35.947 --rc genhtml_function_coverage=1 00:35:35.947 --rc genhtml_legend=1 00:35:35.947 --rc geninfo_all_blocks=1 00:35:35.947 --rc geninfo_unexecuted_blocks=1 00:35:35.947 00:35:35.947 ' 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.947 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:36.207 15:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:35:36.207 15:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:39.502 15:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:39.502 15:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:39.502 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:39.503 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:39.503 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.503 15:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:39.503 Found net devices under 0000:84:00.0: cvl_0_0 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:39.503 Found net devices under 0000:84:00.1: cvl_0_1 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:39.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:39.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:35:39.503 00:35:39.503 --- 10.0.0.2 ping statistics --- 00:35:39.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.503 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:39.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:39.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:35:39.503 00:35:39.503 --- 10.0.0.1 ping statistics --- 00:35:39.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.503 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:39.503 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3333124 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3333124 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3333124 ']' 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
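Note on the test-bed plumbing above: nvmf_tcp_init splits the two detected E810 ports between the root network namespace (initiator side, cvl_0_1) and a dedicated cvl_0_0_ns_spdk namespace (target side, cvl_0_0), opens TCP port 4420 through iptables, and ping-checks both directions. A condensed sketch of that sequence, reusing the interface names and 10.0.0.0/24 addresses from this run (not the verbatim helper):

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target netns -> initiator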
00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:39.504 15:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.504 [2024-10-28 15:31:25.977269] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:39.504 [2024-10-28 15:31:25.979890] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:35:39.504 [2024-10-28 15:31:25.980018] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.504 [2024-10-28 15:31:26.162647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:39.504 [2024-10-28 15:31:26.273681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.504 [2024-10-28 15:31:26.273737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.504 [2024-10-28 15:31:26.273753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.504 [2024-10-28 15:31:26.273768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.504 [2024-10-28 15:31:26.273780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.504 [2024-10-28 15:31:26.275373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.504 [2024-10-28 15:31:26.275380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.765 [2024-10-28 15:31:26.430602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:39.765 [2024-10-28 15:31:26.430648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:39.765 [2024-10-28 15:31:26.431241] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
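Note on the target configuration that follows: rpc_cmd in the test wraps scripts/rpc.py against the /var/tmp/spdk.sock RPC socket referenced in the waitforlisten output above. The equivalent standalone sequence for this delete_subsystem setup, with the same values that appear in the trace (a sketch, not the literal script):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192         # flags exactly as issued via rpc_cmd below
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                   # allow any host, set serial, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # 1,000,000 us (~1 s) average/p99 read and write latency
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0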
00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.765 [2024-10-28 15:31:26.556290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.765 [2024-10-28 15:31:26.576810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.765 NULL1 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.765 15:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.765 Delay0 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3333271 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:35:39.765 15:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:40.024 [2024-10-28 15:31:26.720091] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
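Note on what this test is exercising: with roughly a second of injected latency on Delay0, a 5-second spdk_nvme_perf run keeps a deep queue (128) of 512-byte random I/O outstanding, and the script then deletes the subsystem out from under it. The shape of that race, using the values from this run (a sketch; the script backgrounds perf and records its pid, shown here with $!):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &        # cores 2-3, 70% reads, queue depth 128
    perf_pid=$!
    sleep 2                                              # let the queues fill against the delayed namespace
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The expected outcome is exactly what the following output shows: outstanding commands are completed with errors (sct=0, sc=8) rather than left hanging, and perf exits reporting that errors occurred.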
00:35:41.921 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:41.921 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.921 15:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 [2024-10-28 15:31:28.852337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcae000cfe0 is same with the state(6) to be set 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read 
completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 starting I/O failed: -6 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Read completed with error (sct=0, sc=8) 00:35:42.179 Write completed with error 
(sct=0, sc=8) 00:35:42.179 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions for the outstanding I/O, interleaved with recurring "starting I/O failed: -6" markers, logged between 00:35:42.179 and 00:35:43.114 and elided here; the distinct nvme_tcp error messages from that interval are kept] [2024-10-28 15:31:28.853241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613680 is same with the state(6) to be set 00:35:43.113 [2024-10-28 15:31:29.823358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6149a0 is same with the state(6) to be set [2024-10-28 15:31:29.854869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613860 is same with the state(6) to be set [2024-10-28 15:31:29.855072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6132c0 is same with the state(6) to be set [2024-10-28 15:31:29.855210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcae000d310 is same with the state(6) to be set [2024-10-28 15:31:29.855874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6134a0 is same with the state(6) to be set 00:35:43.114 15:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.114 Initializing NVMe Controllers 00:35:43.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.114 Controller IO queue size 128, less than required. 00:35:43.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:43.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:43.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:43.114 Initialization complete. Launching workers. 00:35:43.114 ======================================================== 00:35:43.114 Latency(us) 00:35:43.114 Device Information : IOPS MiB/s Average min max 00:35:43.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.49 0.09 961053.28 1906.41 1012658.56 00:35:43.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.68 0.08 878398.94 669.51 1014119.44 00:35:43.114 ======================================================== 00:35:43.114 Total : 332.17 0.16 922563.50 669.51 1014119.44 00:35:43.114 00:35:43.114 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:35:43.114 [2024-10-28 15:31:29.856940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6149a0 (9): Bad file descriptor 00:35:43.114 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3333271 00:35:43.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:35:43.114 15:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3333271 00:35:43.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3333271) - No such process 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3333271 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3333271 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3333271 00:35:43.684 15:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:43.684 [2024-10-28 15:31:30.376722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3333663 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:43.684 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:43.684 [2024-10-28 15:31:30.467883] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:35:44.249 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:44.249 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:44.249 15:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:44.815 15:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:44.815 15:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:44.815 15:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:45.073 15:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:45.073 15:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:45.073 15:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:45.638 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:45.639 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:45.639 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:46.203 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:46.203 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:46.203 15:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:46.768 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:46.768 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:46.768 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:47.027 Initializing NVMe Controllers 00:35:47.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:47.027 Controller IO queue size 128, less than required. 00:35:47.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:47.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:47.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:47.027 Initialization complete. Launching workers. 
00:35:47.027 ======================================================== 00:35:47.027 Latency(us) 00:35:47.027 Device Information : IOPS MiB/s Average min max 00:35:47.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004220.96 1000184.85 1012148.60 00:35:47.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006531.15 1000389.84 1040954.08 00:35:47.027 ======================================================== 00:35:47.027 Total : 256.00 0.12 1005376.05 1000184.85 1040954.08 00:35:47.027 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3333663 00:35:47.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3333663) - No such process 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3333663 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.286 rmmod nvme_tcp 00:35:47.286 rmmod nvme_fabrics 00:35:47.286 rmmod nvme_keyring 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3333124 ']' 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3333124 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3333124 ']' 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3333124 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:47.286 15:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3333124 00:35:47.286 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:47.286 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:47.286 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3333124' 00:35:47.286 killing process with pid 3333124 00:35:47.286 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3333124 00:35:47.286 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3333124 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.545 15:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:50.088 00:35:50.088 real 0m13.834s 00:35:50.088 user 0m25.670s 00:35:50.088 sys 0m4.764s 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:50.088 ************************************ 00:35:50.088 END TEST nvmf_delete_subsystem 00:35:50.088 ************************************ 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:50.088 ************************************ 00:35:50.088 START TEST nvmf_host_management 00:35:50.088 ************************************ 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:50.088 * Looking for test storage... 00:35:50.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # lcov --version 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:50.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.088 --rc genhtml_branch_coverage=1 00:35:50.088 --rc genhtml_function_coverage=1 00:35:50.088 --rc genhtml_legend=1 00:35:50.088 --rc geninfo_all_blocks=1 00:35:50.088 --rc geninfo_unexecuted_blocks=1 00:35:50.088 00:35:50.088 ' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:50.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.088 --rc genhtml_branch_coverage=1 00:35:50.088 --rc genhtml_function_coverage=1 00:35:50.088 --rc genhtml_legend=1 00:35:50.088 --rc geninfo_all_blocks=1 00:35:50.088 --rc geninfo_unexecuted_blocks=1 00:35:50.088 00:35:50.088 ' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:50.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.088 --rc genhtml_branch_coverage=1 00:35:50.088 --rc genhtml_function_coverage=1 00:35:50.088 --rc genhtml_legend=1 00:35:50.088 --rc geninfo_all_blocks=1 00:35:50.088 --rc geninfo_unexecuted_blocks=1 00:35:50.088 00:35:50.088 ' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:50.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.088 --rc genhtml_branch_coverage=1 00:35:50.088 --rc genhtml_function_coverage=1 00:35:50.088 --rc genhtml_legend=1 
00:35:50.088 --rc geninfo_all_blocks=1 00:35:50.088 --rc geninfo_unexecuted_blocks=1 00:35:50.088 00:35:50.088 ' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.088 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.089 15:31:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:35:50.089 15:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:53.382 15:31:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:53.382 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:53.382 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
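
Note on the trace above: the pci_net_devs glob resolves each detected e810 PCI function (0000:84:00.0 and 0000:84:00.1 in this run) to its kernel net device by listing /sys/bus/pci/devices/<pci>/net/, and the matching "Found net devices under ..." lines follow below. A minimal sketch of that lookup, with the PCI address taken from this run and everything else illustrative, not the nvmf/common.sh helper itself:

# Sketch only: mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step traced above.
pci=0000:84:00.0
for netdev_path in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdev_path" ] || continue      # skip PCI functions with no bound net device
    echo "Found net devices under $pci: ${netdev_path##*/}"
done
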
00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:53.382 Found net devices under 0000:84:00.0: cvl_0_0 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.382 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:53.383 Found net devices under 0000:84:00.1: cvl_0_1 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:53.383 15:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:53.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:53.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:35:53.383 00:35:53.383 --- 10.0.0.2 ping statistics --- 00:35:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.383 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:53.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:53.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:35:53.383 00:35:53.383 --- 10.0.0.1 ping statistics --- 00:35:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.383 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3336144 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3336144 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3336144 ']' 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:53.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.383 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:53.641 [2024-10-28 15:31:40.311699] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:53.641 [2024-10-28 15:31:40.314502] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:35:53.642 [2024-10-28 15:31:40.314628] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.642 [2024-10-28 15:31:40.488335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:53.901 [2024-10-28 15:31:40.611999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.901 [2024-10-28 15:31:40.612110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.901 [2024-10-28 15:31:40.612147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.901 [2024-10-28 15:31:40.612180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.901 [2024-10-28 15:31:40.612206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:53.901 [2024-10-28 15:31:40.615678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:53.901 [2024-10-28 15:31:40.615708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:53.901 [2024-10-28 15:31:40.615760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:53.901 [2024-10-28 15:31:40.615763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.159 [2024-10-28 15:31:40.780839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:54.159 [2024-10-28 15:31:40.781052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:54.159 [2024-10-28 15:31:40.781358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:54.159 [2024-10-28 15:31:40.782243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:54.159 [2024-10-28 15:31:40.782669] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
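
Note on the trace above: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with -m 0x1E --interrupt-mode, and waitforlisten then blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of that wait, assuming the socket path shown in the log; the retry budget and the rpc_get_methods probe are illustrative choices, not the exact autotest helper:

# Sketch only: poll the SPDK RPC socket until the freshly started nvmf_tgt responds.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    if "$rpc_py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break                              # target is up and serving RPCs
    fi
    sleep 0.1
done
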
00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.159 [2024-10-28 15:31:40.880965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.159 Malloc0 00:35:54.159 [2024-10-28 15:31:40.972934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:54.159 15:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3336316 00:35:54.159 15:31:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3336316 /var/tmp/bdevperf.sock 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3336316 ']' 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:54.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:54.159 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:54.159 { 00:35:54.159 "params": { 00:35:54.159 "name": "Nvme$subsystem", 00:35:54.159 "trtype": "$TEST_TRANSPORT", 00:35:54.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.159 "adrfam": "ipv4", 00:35:54.159 "trsvcid": "$NVMF_PORT", 00:35:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.159 "hdgst": ${hdgst:-false}, 00:35:54.159 "ddgst": ${ddgst:-false} 00:35:54.159 }, 00:35:54.159 "method": "bdev_nvme_attach_controller" 00:35:54.159 } 00:35:54.159 EOF 00:35:54.159 )") 00:35:54.160 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:54.160 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
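The JSON fragment being assembled here (printed in full just below) tells bdevperf to attach to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host0. The target side it attaches to was populated a few records earlier by the rpcs.txt batch, which appears above only as 'cat' and 'rpc_cmd' plus the Malloc0 and 'Listening on 10.0.0.2 port 4420' notices. A representative hand-typed equivalent of that batch (the bdev size and serial number are illustrative assumptions, not values taken from this run) is:

    rpc=./spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # as issued by host_management.sh above
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev with 512 B blocks (sizes assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0  # serial number assumed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The explicit host entry is the piece this test exercises: nvmf_subsystem_remove_host later yanks the controller out from under bdevperf (producing the wall of ABORTED - SQ DELETION completions below), and nvmf_subsystem_add_host lets the subsequent controller reset succeed.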
00:35:54.417 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:54.417 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:54.417 "params": { 00:35:54.417 "name": "Nvme0", 00:35:54.417 "trtype": "tcp", 00:35:54.417 "traddr": "10.0.0.2", 00:35:54.417 "adrfam": "ipv4", 00:35:54.417 "trsvcid": "4420", 00:35:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.418 "hdgst": false, 00:35:54.418 "ddgst": false 00:35:54.418 }, 00:35:54.418 "method": "bdev_nvme_attach_controller" 00:35:54.418 }' 00:35:54.418 [2024-10-28 15:31:41.079527] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:35:54.418 [2024-10-28 15:31:41.079625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336316 ] 00:35:54.418 [2024-10-28 15:31:41.156576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.418 [2024-10-28 15:31:41.218968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.676 Running I/O for 10 seconds... 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.936 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.936 [2024-10-28 15:31:41.637069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637297] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.936 [2024-10-28 15:31:41.637415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.936 [2024-10-28 15:31:41.637429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.637981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.637998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.937 [2024-10-28 15:31:41.638688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.937 [2024-10-28 15:31:41.638704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.638982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.638997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.639012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.639026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.639042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.639055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.639070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.639084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.639100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.639114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.639129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.639147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.639162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.938 [2024-10-28 15:31:41.639176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.938 [2024-10-28 15:31:41.640442] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:54.938 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.938 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:54.938 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.938 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:54.938 task offset: 37120 on job bdev=Nvme0n1 fails 00:35:54.938 00:35:54.938 Latency(us) 00:35:54.938 [2024-10-28T14:31:41.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.938 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:54.938 Job: Nvme0n1 ended in about 0.19 seconds with error 00:35:54.938 Verification LBA range: start 0x0 length 0x400 00:35:54.938 Nvme0n1 : 0.19 1345.55 84.10 336.39 0.00 36096.80 2730.67 34175.81 00:35:54.938 [2024-10-28T14:31:41.805Z] =================================================================================================================== 00:35:54.938 [2024-10-28T14:31:41.805Z] Total : 1345.55 84.10 336.39 0.00 36096.80 2730.67 34175.81 00:35:54.938 [2024-10-28 15:31:41.643337] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:54.938 [2024-10-28 15:31:41.643375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2354040 (9): Bad file descriptor 00:35:54.938 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.938 15:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:35:54.938 [2024-10-28 15:31:41.688992] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:35:55.869 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3336316 00:35:55.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3336316) - No such process 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.870 { 00:35:55.870 "params": { 00:35:55.870 "name": "Nvme$subsystem", 00:35:55.870 "trtype": "$TEST_TRANSPORT", 00:35:55.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.870 "adrfam": "ipv4", 00:35:55.870 "trsvcid": 
"$NVMF_PORT", 00:35:55.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.870 "hdgst": ${hdgst:-false}, 00:35:55.870 "ddgst": ${ddgst:-false} 00:35:55.870 }, 00:35:55.870 "method": "bdev_nvme_attach_controller" 00:35:55.870 } 00:35:55.870 EOF 00:35:55.870 )") 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:55.870 15:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:55.870 "params": { 00:35:55.870 "name": "Nvme0", 00:35:55.870 "trtype": "tcp", 00:35:55.870 "traddr": "10.0.0.2", 00:35:55.870 "adrfam": "ipv4", 00:35:55.870 "trsvcid": "4420", 00:35:55.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.870 "hdgst": false, 00:35:55.870 "ddgst": false 00:35:55.870 }, 00:35:55.870 "method": "bdev_nvme_attach_controller" 00:35:55.870 }' 00:35:55.870 [2024-10-28 15:31:42.709918] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:35:55.870 [2024-10-28 15:31:42.710039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336457 ] 00:35:56.127 [2024-10-28 15:31:42.791499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.127 [2024-10-28 15:31:42.852579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.384 Running I/O for 1 seconds... 
00:35:57.314 1536.00 IOPS, 96.00 MiB/s 00:35:57.314 Latency(us) 00:35:57.314 [2024-10-28T14:31:44.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.314 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:57.314 Verification LBA range: start 0x0 length 0x400 00:35:57.314 Nvme0n1 : 1.01 1580.45 98.78 0.00 0.00 39844.17 6505.05 34758.35 00:35:57.314 [2024-10-28T14:31:44.181Z] =================================================================================================================== 00:35:57.314 [2024-10-28T14:31:44.181Z] Total : 1580.45 98.78 0.00 0.00 39844.17 6505.05 34758.35 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:57.570 rmmod nvme_tcp 00:35:57.570 rmmod nvme_fabrics 00:35:57.570 rmmod nvme_keyring 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3336144 ']' 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3336144 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3336144 ']' 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3336144 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:35:57.570 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:57.571 15:31:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3336144 00:35:57.571 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:57.571 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:57.571 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3336144' 00:35:57.571 killing process with pid 3336144 00:35:57.571 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3336144 00:35:57.571 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3336144 00:35:58.137 [2024-10-28 15:31:44.756862] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:58.137 15:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.042 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.042 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:36:00.042 00:36:00.042 real 0m10.366s 00:36:00.042 user 0m17.694s 00:36:00.042 sys 0m4.896s 00:36:00.042 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:00.042 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:00.042 ************************************ 00:36:00.042 END TEST nvmf_host_management 00:36:00.042 ************************************ 00:36:00.303 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:00.303 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:00.303 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:00.303 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:00.303 ************************************ 00:36:00.303 START TEST nvmf_lvol 00:36:00.303 ************************************ 00:36:00.303 15:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:00.303 * Looking for test storage... 00:36:00.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:00.303 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:36:00.303 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # lcov --version 00:36:00.303 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:36:00.563 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:36:00.563 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.563 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.563 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.563 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.563 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:36:00.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.564 --rc genhtml_branch_coverage=1 00:36:00.564 --rc genhtml_function_coverage=1 00:36:00.564 --rc genhtml_legend=1 00:36:00.564 --rc geninfo_all_blocks=1 00:36:00.564 --rc geninfo_unexecuted_blocks=1 00:36:00.564 00:36:00.564 ' 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:36:00.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.564 --rc genhtml_branch_coverage=1 00:36:00.564 --rc genhtml_function_coverage=1 00:36:00.564 --rc genhtml_legend=1 00:36:00.564 --rc geninfo_all_blocks=1 00:36:00.564 --rc geninfo_unexecuted_blocks=1 00:36:00.564 00:36:00.564 ' 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:36:00.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.564 --rc genhtml_branch_coverage=1 00:36:00.564 --rc genhtml_function_coverage=1 00:36:00.564 --rc genhtml_legend=1 00:36:00.564 --rc geninfo_all_blocks=1 00:36:00.564 --rc geninfo_unexecuted_blocks=1 00:36:00.564 00:36:00.564 ' 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:36:00.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.564 --rc genhtml_branch_coverage=1 00:36:00.564 --rc genhtml_function_coverage=1 00:36:00.564 --rc genhtml_legend=1 00:36:00.564 --rc geninfo_all_blocks=1 00:36:00.564 --rc geninfo_unexecuted_blocks=1 00:36:00.564 00:36:00.564 ' 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.564 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.565 15:31:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:36:00.565 15:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:03.858 15:31:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:03.858 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:03.858 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:03.858 Found net devices under 0000:84:00.0: cvl_0_0 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:03.858 Found net devices under 0000:84:00.1: cvl_0_1 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:03.858 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:03.859 
15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:03.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:36:03.859 00:36:03.859 --- 10.0.0.2 ping statistics --- 00:36:03.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.859 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:03.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:03.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:36:03.859 00:36:03.859 --- 10.0.0.1 ping statistics --- 00:36:03.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.859 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3338800 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3338800 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3338800 ']' 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:03.859 15:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:03.859 [2024-10-28 15:31:50.350378] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
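The nvmf_tcp_init steps traced above carve the two E810 ports into a target/initiator pair on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened with a tagged iptables rule, and a ping in each direction confirms the path. A condensed sketch of that plumbing, with values copied from this log (the shell variables below are only shorthand, not the test's own names):

TARGET_NS=cvl_0_0_ns_spdk      # namespace that owns the target-side port
TARGET_IF=cvl_0_0              # target NIC, moved into the namespace
INITIATOR_IF=cvl_0_1           # initiator NIC, left in the default namespace

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                          # initiator address
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target address

ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Open the NVMe/TCP port and tag the rule so teardown can drop it again
# (nvmftestfini later runs: iptables-save | grep -v SPDK_NVMF | iptables-restore).
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow NVMe/TCP on 4420'

# Reachability check in both directions before the target application is started.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

Running the target behind ip netns exec is what lets a single machine exercise real NIC-to-NIC NVMe/TCP traffic: the initiator in the default namespace and the target in cvl_0_0_ns_spdk can only reach each other through the physical E810 ports.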
00:36:03.859 [2024-10-28 15:31:50.351946] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:36:03.859 [2024-10-28 15:31:50.352022] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.859 [2024-10-28 15:31:50.492265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:03.859 [2024-10-28 15:31:50.609362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.859 [2024-10-28 15:31:50.609499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.859 [2024-10-28 15:31:50.609536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.859 [2024-10-28 15:31:50.609567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.859 [2024-10-28 15:31:50.609594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:03.859 [2024-10-28 15:31:50.612837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.859 [2024-10-28 15:31:50.612902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:03.859 [2024-10-28 15:31:50.612912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.120 [2024-10-28 15:31:50.790438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:04.120 [2024-10-28 15:31:50.790929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:04.120 [2024-10-28 15:31:50.790974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:04.120 [2024-10-28 15:31:50.791577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
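With the target up (three reactors in interrupt mode, as the notices above show), nvmf_lvol.sh drives the rest of the test over rpc.py: a TCP transport, two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, a 10-second random-write perf load, and a snapshot/resize/clone/inflate sequence under that load. The trace that follows shows every call with full workspace paths; condensed, and with $rpc standing in for scripts/rpc.py, the flow is roughly:

rpc=./scripts/rpc.py    # abbreviation for the full rpc.py path shown in the log

$rpc nvmf_create_transport -t tcp -o -u 8192

$rpc bdev_malloc_create 64 512                                  # -> Malloc0
$rpc bdev_malloc_create 64 512                                  # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB logical volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Random-write load from the initiator side while the lvol is reshaped underneath it.
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

wait "$perf_pid"

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"

The point of the exercise is that the perf workload keeps 128 queued random writes in flight on cores 3 and 4 while the logical volume is snapshotted, grown from 20 to 30 MiB, cloned and inflated, so any lvol or interrupt-mode bug under I/O would surface as perf errors or in the latency numbers reported further down in the log.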
00:36:05.061 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:05.061 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:36:05.061 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:05.061 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.061 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:05.061 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.061 15:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:06.003 [2024-10-28 15:31:52.502463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.003 15:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:06.576 15:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:36:06.576 15:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:06.837 15:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:36:06.837 15:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:36:07.781 15:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:36:08.352 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=09475592-710a-4635-a8e7-28f607c481bd 00:36:08.352 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09475592-710a-4635-a8e7-28f607c481bd lvol 20 00:36:08.613 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=96fbcbec-16d2-4239-a524-8394293430fe 00:36:08.613 15:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:09.554 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96fbcbec-16d2-4239-a524-8394293430fe 00:36:10.125 15:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:10.694 [2024-10-28 15:31:57.394749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:36:10.694 15:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:11.263 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3339620 00:36:11.263 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:36:11.263 15:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:36:12.640 15:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 96fbcbec-16d2-4239-a524-8394293430fe MY_SNAPSHOT 00:36:12.898 15:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9deaa72c-7cb5-4bf2-a34e-2d3670090e2c 00:36:12.898 15:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 96fbcbec-16d2-4239-a524-8394293430fe 30 00:36:13.156 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9deaa72c-7cb5-4bf2-a34e-2d3670090e2c MY_CLONE 00:36:13.721 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=75546e4c-48fe-4460-9051-0928a5e41b9e 00:36:13.721 15:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 75546e4c-48fe-4460-9051-0928a5e41b9e 00:36:14.654 15:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3339620 00:36:22.761 Initializing NVMe Controllers 00:36:22.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:22.761 Controller IO queue size 128, less than required. 00:36:22.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:22.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:36:22.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:36:22.761 Initialization complete. Launching workers. 
00:36:22.761 ======================================================== 00:36:22.761 Latency(us) 00:36:22.761 Device Information : IOPS MiB/s Average min max 00:36:22.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10567.50 41.28 12112.72 628.46 66607.35 00:36:22.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10489.20 40.97 12208.24 6106.13 66326.41 00:36:22.761 ======================================================== 00:36:22.761 Total : 21056.70 82.25 12160.30 628.46 66607.35 00:36:22.761 00:36:22.761 15:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.761 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96fbcbec-16d2-4239-a524-8394293430fe 00:36:23.021 15:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09475592-710a-4635-a8e7-28f607c481bd 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:23.598 rmmod nvme_tcp 00:36:23.598 rmmod nvme_fabrics 00:36:23.598 rmmod nvme_keyring 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3338800 ']' 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3338800 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3338800 ']' 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3338800 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3338800 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3338800' 00:36:23.598 killing process with pid 3338800 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3338800 00:36:23.598 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3338800 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.194 15:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:26.101 00:36:26.101 real 0m25.917s 00:36:26.101 user 1m6.973s 00:36:26.101 sys 0m10.284s 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:26.101 ************************************ 00:36:26.101 END TEST nvmf_lvol 00:36:26.101 ************************************ 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:26.101 ************************************ 00:36:26.101 START TEST nvmf_lvs_grow 00:36:26.101 
************************************ 00:36:26.101 15:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:26.360 * Looking for test storage... 00:36:26.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lcov --version 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:36:26.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.360 --rc genhtml_branch_coverage=1 00:36:26.360 --rc genhtml_function_coverage=1 00:36:26.360 --rc genhtml_legend=1 00:36:26.360 --rc geninfo_all_blocks=1 00:36:26.360 --rc geninfo_unexecuted_blocks=1 00:36:26.360 00:36:26.360 ' 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:36:26.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.360 --rc genhtml_branch_coverage=1 00:36:26.360 --rc genhtml_function_coverage=1 00:36:26.360 --rc genhtml_legend=1 00:36:26.360 --rc geninfo_all_blocks=1 00:36:26.360 --rc geninfo_unexecuted_blocks=1 00:36:26.360 00:36:26.360 ' 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:36:26.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.360 --rc genhtml_branch_coverage=1 00:36:26.360 --rc genhtml_function_coverage=1 00:36:26.360 --rc genhtml_legend=1 00:36:26.360 --rc geninfo_all_blocks=1 00:36:26.360 --rc geninfo_unexecuted_blocks=1 00:36:26.360 00:36:26.360 ' 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:36:26.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.360 --rc genhtml_branch_coverage=1 00:36:26.360 --rc genhtml_function_coverage=1 00:36:26.360 --rc genhtml_legend=1 00:36:26.360 --rc geninfo_all_blocks=1 00:36:26.360 --rc geninfo_unexecuted_blocks=1 00:36:26.360 00:36:26.360 ' 00:36:26.360 15:32:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.360 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
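The build_nvmf_app_args fragment traced here (identical to the one at the start of the nvmf_lvol run) is where nvmf/common.sh composes the target command line that eventually appears as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m <mask>. The xtrace only shows the already-evaluated conditions, so the guard names below are paraphrased; in outline the assembly is:

# Outline of build_nvmf_app_args as reflected in the xtrace (guard names paraphrased).
NVMF_APP=(./build/bin/nvmf_tgt)               # workspace path abbreviated
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id 0 and every tracepoint group enabled
NVMF_APP+=("${NO_HUGE[@]}")                   # empty for this job; populated only for no-hugepage runs
if (( interrupt_mode_requested )); then       # true here, since the suite runs with --interrupt-mode
        NVMF_APP+=(--interrupt-mode)
fi

# Once nvmf_tcp_init has created the target namespace, the whole command is prefixed with it:
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

# nvmfappstart then appends the per-test core mask, launches the app in the background and
# waits for /var/tmp/spdk.sock, e.g. "${NVMF_APP[@]}" -m 0x7 in the nvmf_lvol run above.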
00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:36:26.361 15:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:29.647 15:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
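The arrays set up just above form the device table for gather_supported_nvmf_pci_devs: the supported Intel E810 (0x1592/0x159b), X722 and Mellanox device IDs are looked up in pci_bus_cache, which the test environment populates before this point, and since this job runs with SPDK_TEST_NVMF_NICS=e810 only the e810 entries survive as pci_devs. The loop being entered here then resolves each PCI address to its kernel interface through sysfs; condensed, and leaving out the driver and RDMA branches the trace also evaluates:

intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # cache filled outside this excerpt
pci_devs=("${e810[@]}")                                                     # e810-only job, so the other lists are dropped

net_devs=()
for pci in "${pci_devs[@]}"; do
        # Each port is visible as /sys/bus/pci/devices/<addr>/net/<ifname>.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
done

On this node the loop reports cvl_0_0 and cvl_0_1 (as echoed below), and those two interfaces are again split by nvmf_tcp_init into the target namespace and the initiator side for the lvs_grow tests.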
00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:29.647 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:29.647 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:29.647 Found net devices under 0000:84:00.0: cvl_0_0 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:29.647 Found net devices under 0000:84:00.1: cvl_0_1 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:29.647 15:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:29.647 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:29.648 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:29.648 15:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:29.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:29.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:36:29.648 00:36:29.648 --- 10.0.0.2 ping statistics --- 00:36:29.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.648 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:29.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
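The block above is the physical-NIC TCP topology setup: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction proves the path. Condensed from the commands in the trace (interface and namespace names are the ones this log reports; they differ per rig):

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"            # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> root ns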
00:36:29.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:36:29.648 00:36:29.648 --- 10.0.0.1 ping statistics --- 00:36:29.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.648 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3343139 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3343139 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3343139 ']' 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:29.648 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:29.648 [2024-10-28 15:32:16.146705] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
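With networking in place, nvmfappstart launches the target inside the namespace on a single core with --interrupt-mode (this whole nvmf_target_core_interrupt_mode suite runs the target in interrupt rather than poll mode) and then blocks until the RPC socket answers. A simplified launch-and-wait sketch; the polling loop stands in for the harness's waitforlisten helper and is only illustrative:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # Illustrative stand-in for waitforlisten: poll the RPC socket until it responds.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target process died
        sleep 0.5
    done

The NOTICE lines that follow in the trace (interrupt mode enabled, one core available, reactor on core 0, app_thread and nvmf_tgt_poll_group_000 in intr mode) are the target confirming exactly this configuration.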
00:36:29.648 [2024-10-28 15:32:16.148003] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:36:29.648 [2024-10-28 15:32:16.148072] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.648 [2024-10-28 15:32:16.287185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.648 [2024-10-28 15:32:16.407713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.648 [2024-10-28 15:32:16.407825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:29.648 [2024-10-28 15:32:16.407860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.648 [2024-10-28 15:32:16.407899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.648 [2024-10-28 15:32:16.407924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.648 [2024-10-28 15:32:16.409247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.907 [2024-10-28 15:32:16.585694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:29.907 [2024-10-28 15:32:16.586458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:30.167 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:30.167 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:36:30.167 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:30.167 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:30.167 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:30.167 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.167 15:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:30.736 [2024-10-28 15:32:17.346522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:30.736 ************************************ 00:36:30.736 START TEST lvs_grow_clean 00:36:30.736 ************************************ 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:30.736 15:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:31.307 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:31.307 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:32.247 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1610a9df-bc46-4a55-a83f-046c93656822 00:36:32.247 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:32.247 15:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:32.819 15:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:32.819 15:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:32.819 15:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1610a9df-bc46-4a55-a83f-046c93656822 lvol 150 00:36:33.388 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d 00:36:33.388 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
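lvs_grow_clean builds its device stack on a plain file: a 200 MiB file is exposed as an AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters is created on it (the trace then reads back total_data_clusters and expects 49, i.e. 200 MiB of 4 MiB clusters minus metadata), and a 150 MiB lvol is carved out of it. Condensed sketch of that sequence; $lvs and $lvol capture the UUIDs the RPCs return (1610a9df-... and 0b79e2e6-... in this run):

    rpc=$SPDK/scripts/rpc.py
    aio_file=$SPDK/test/nvmf/target/aio_bdev
    rm -f "$aio_file"
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 49 ))                               # 200 MiB / 4 MiB clusters, minus metadata
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB logical volume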
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:33.388 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:34.328 [2024-10-28 15:32:20.834300] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:34.328 [2024-10-28 15:32:20.834495] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:34.328 true 00:36:34.328 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:34.328 15:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:34.328 15:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:34.328 15:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:35.269 15:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d 00:36:35.840 15:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:36.410 [2024-10-28 15:32:23.222945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:36.410 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3343982 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3343982 /var/tmp/bdevperf.sock 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3343982 ']' 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
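Next the backing file is grown to 400 MiB and bdev_aio_rescan makes the AIO bdev pick up the new size (the NOTICE above shows the block count going from 51200 to 102400); the lvstore deliberately still reports 49 clusters because nothing has grown it yet. The lvol is then exported over NVMe/TCP and a separate bdevperf process is started on core 1 with its own RPC socket. Condensed sketch, with flags taken verbatim from the trace:

    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev                    # AIO bdev now sees 102400 4 KiB blocks
    (( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 49 ))

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!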
rpc_addr=/var/tmp/bdevperf.sock 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:37.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:37.397 15:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:37.397 [2024-10-28 15:32:24.060337] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:36:37.397 [2024-10-28 15:32:24.060508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343982 ] 00:36:37.397 [2024-10-28 15:32:24.202809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.656 [2024-10-28 15:32:24.309021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.915 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:37.915 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:36:37.915 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:38.174 Nvme0n1 00:36:38.174 15:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:38.741 [ 00:36:38.741 { 00:36:38.741 "name": "Nvme0n1", 00:36:38.741 "aliases": [ 00:36:38.741 "0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d" 00:36:38.741 ], 00:36:38.742 "product_name": "NVMe disk", 00:36:38.742 "block_size": 4096, 00:36:38.742 "num_blocks": 38912, 00:36:38.742 "uuid": "0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d", 00:36:38.742 "numa_id": 1, 00:36:38.742 "assigned_rate_limits": { 00:36:38.742 "rw_ios_per_sec": 0, 00:36:38.742 "rw_mbytes_per_sec": 0, 00:36:38.742 "r_mbytes_per_sec": 0, 00:36:38.742 "w_mbytes_per_sec": 0 00:36:38.742 }, 00:36:38.742 "claimed": false, 00:36:38.742 "zoned": false, 00:36:38.742 "supported_io_types": { 00:36:38.742 "read": true, 00:36:38.742 "write": true, 00:36:38.742 "unmap": true, 00:36:38.742 "flush": true, 00:36:38.742 "reset": true, 00:36:38.742 "nvme_admin": true, 00:36:38.742 "nvme_io": true, 00:36:38.742 "nvme_io_md": false, 00:36:38.742 "write_zeroes": true, 00:36:38.742 "zcopy": false, 00:36:38.742 "get_zone_info": false, 00:36:38.742 "zone_management": false, 00:36:38.742 "zone_append": false, 00:36:38.742 "compare": true, 00:36:38.742 "compare_and_write": true, 00:36:38.742 "abort": true, 00:36:38.742 "seek_hole": false, 00:36:38.742 "seek_data": false, 00:36:38.742 "copy": true, 
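bdevperf acts as the NVMe/TCP initiator here: over its private RPC socket it attaches a controller to the subsystem the target just exported, which surfaces the remote namespace as bdev Nvme0n1, and the JSON that follows is simply bdev_get_bdevs for that bdev. Sketch of the two calls (the brpc wrapper is a convenience for this sketch, not a harness helper):

    brpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }
    brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0        # exposes the remote namespace as Nvme0n1
    brpc bdev_get_bdevs -b Nvme0n1 -t 3000           # prints the descriptor shown in the trace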
00:36:38.742 "nvme_iov_md": false 00:36:38.742 }, 00:36:38.742 "memory_domains": [ 00:36:38.742 { 00:36:38.742 "dma_device_id": "system", 00:36:38.742 "dma_device_type": 1 00:36:38.742 } 00:36:38.742 ], 00:36:38.742 "driver_specific": { 00:36:38.742 "nvme": [ 00:36:38.742 { 00:36:38.742 "trid": { 00:36:38.742 "trtype": "TCP", 00:36:38.742 "adrfam": "IPv4", 00:36:38.742 "traddr": "10.0.0.2", 00:36:38.742 "trsvcid": "4420", 00:36:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:38.742 }, 00:36:38.742 "ctrlr_data": { 00:36:38.742 "cntlid": 1, 00:36:38.742 "vendor_id": "0x8086", 00:36:38.742 "model_number": "SPDK bdev Controller", 00:36:38.742 "serial_number": "SPDK0", 00:36:38.742 "firmware_revision": "25.01", 00:36:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.742 "oacs": { 00:36:38.742 "security": 0, 00:36:38.742 "format": 0, 00:36:38.742 "firmware": 0, 00:36:38.742 "ns_manage": 0 00:36:38.742 }, 00:36:38.742 "multi_ctrlr": true, 00:36:38.742 "ana_reporting": false 00:36:38.742 }, 00:36:38.742 "vs": { 00:36:38.742 "nvme_version": "1.3" 00:36:38.742 }, 00:36:38.742 "ns_data": { 00:36:38.742 "id": 1, 00:36:38.742 "can_share": true 00:36:38.742 } 00:36:38.742 } 00:36:38.742 ], 00:36:38.742 "mp_policy": "active_passive" 00:36:38.742 } 00:36:38.742 } 00:36:38.742 ] 00:36:38.742 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3344231 00:36:38.742 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:38.742 15:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:39.002 Running I/O for 10 seconds... 
00:36:39.942 Latency(us) 00:36:39.942 [2024-10-28T14:32:26.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:39.942 Nvme0n1 : 1.00 7626.00 29.79 0.00 0.00 0.00 0.00 0.00 00:36:39.942 [2024-10-28T14:32:26.809Z] =================================================================================================================== 00:36:39.942 [2024-10-28T14:32:26.809Z] Total : 7626.00 29.79 0.00 0.00 0.00 0.00 0.00 00:36:39.942 00:36:40.879 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:40.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:40.879 Nvme0n1 : 2.00 8459.00 33.04 0.00 0.00 0.00 0.00 0.00 00:36:40.879 [2024-10-28T14:32:27.746Z] =================================================================================================================== 00:36:40.879 [2024-10-28T14:32:27.746Z] Total : 8459.00 33.04 0.00 0.00 0.00 0.00 0.00 00:36:40.879 00:36:41.138 true 00:36:41.138 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:41.138 15:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:41.708 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:41.708 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:41.708 15:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3344231 00:36:41.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:41.969 Nvme0n1 : 3.00 7820.00 30.55 0.00 0.00 0.00 0.00 0.00 00:36:41.969 [2024-10-28T14:32:28.836Z] =================================================================================================================== 00:36:41.969 [2024-10-28T14:32:28.836Z] Total : 7820.00 30.55 0.00 0.00 0.00 0.00 0.00 00:36:41.969 00:36:42.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:42.908 Nvme0n1 : 4.00 7564.75 29.55 0.00 0.00 0.00 0.00 0.00 00:36:42.908 [2024-10-28T14:32:29.775Z] =================================================================================================================== 00:36:42.908 [2024-10-28T14:32:29.775Z] Total : 7564.75 29.55 0.00 0.00 0.00 0.00 0.00 00:36:42.908 00:36:43.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:43.848 Nvme0n1 : 5.00 7372.60 28.80 0.00 0.00 0.00 0.00 0.00 00:36:43.848 [2024-10-28T14:32:30.715Z] =================================================================================================================== 00:36:43.848 [2024-10-28T14:32:30.715Z] Total : 7372.60 28.80 0.00 0.00 0.00 0.00 0.00 00:36:43.848 00:36:45.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:45.229 Nvme0n1 : 6.00 7213.00 28.18 0.00 0.00 0.00 0.00 0.00 00:36:45.229 [2024-10-28T14:32:32.096Z] 
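This is the point of the test: while bdevperf keeps issuing random writes (the per-second rows above and below), the harness grows the lvstore into the space the rescanned AIO bdev now exposes and checks that total_data_clusters jumps from 49 to 99. Sketch of the grow-while-busy step, following the order in the trace:

    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    sleep 2                                          # let the workload get going first
    $rpc bdev_lvol_grow_lvstore -u "$lvs"            # grow the lvstore under load
    (( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))
    wait "$run_test_pid"                             # bdevperf finishes its 10 s run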
=================================================================================================================== 00:36:45.229 [2024-10-28T14:32:32.096Z] Total : 7213.00 28.18 0.00 0.00 0.00 0.00 0.00 00:36:45.229 00:36:46.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:46.167 Nvme0n1 : 7.00 7107.86 27.77 0.00 0.00 0.00 0.00 0.00 00:36:46.167 [2024-10-28T14:32:33.034Z] =================================================================================================================== 00:36:46.167 [2024-10-28T14:32:33.034Z] Total : 7107.86 27.77 0.00 0.00 0.00 0.00 0.00 00:36:46.167 00:36:47.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:47.105 Nvme0n1 : 8.00 7022.00 27.43 0.00 0.00 0.00 0.00 0.00 00:36:47.105 [2024-10-28T14:32:33.972Z] =================================================================================================================== 00:36:47.105 [2024-10-28T14:32:33.972Z] Total : 7022.00 27.43 0.00 0.00 0.00 0.00 0.00 00:36:47.105 00:36:48.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:48.043 Nvme0n1 : 9.00 6954.56 27.17 0.00 0.00 0.00 0.00 0.00 00:36:48.043 [2024-10-28T14:32:34.910Z] =================================================================================================================== 00:36:48.043 [2024-10-28T14:32:34.910Z] Total : 6954.56 27.17 0.00 0.00 0.00 0.00 0.00 00:36:48.043 00:36:48.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:48.982 Nvme0n1 : 10.00 6907.10 26.98 0.00 0.00 0.00 0.00 0.00 00:36:48.982 [2024-10-28T14:32:35.849Z] =================================================================================================================== 00:36:48.982 [2024-10-28T14:32:35.849Z] Total : 6907.10 26.98 0.00 0.00 0.00 0.00 0.00 00:36:48.982 00:36:48.982 00:36:48.982 Latency(us) 00:36:48.982 [2024-10-28T14:32:35.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:48.982 Nvme0n1 : 10.01 6911.57 27.00 0.00 0.00 18507.74 5461.33 38059.43 00:36:48.982 [2024-10-28T14:32:35.849Z] =================================================================================================================== 00:36:48.982 [2024-10-28T14:32:35.849Z] Total : 6911.57 27.00 0.00 0.00 18507.74 5461.33 38059.43 00:36:48.982 { 00:36:48.982 "results": [ 00:36:48.982 { 00:36:48.982 "job": "Nvme0n1", 00:36:48.982 "core_mask": "0x2", 00:36:48.982 "workload": "randwrite", 00:36:48.982 "status": "finished", 00:36:48.982 "queue_depth": 128, 00:36:48.982 "io_size": 4096, 00:36:48.982 "runtime": 10.012047, 00:36:48.982 "iops": 6911.57362725125, 00:36:48.982 "mibps": 26.998334481450197, 00:36:48.982 "io_failed": 0, 00:36:48.982 "io_timeout": 0, 00:36:48.982 "avg_latency_us": 18507.736625523918, 00:36:48.982 "min_latency_us": 5461.333333333333, 00:36:48.982 "max_latency_us": 38059.42518518519 00:36:48.982 } 00:36:48.982 ], 00:36:48.982 "core_count": 1 00:36:48.982 } 00:36:48.982 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3343982 00:36:48.982 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3343982 ']' 00:36:48.982 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3343982 00:36:48.982 
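bdevperf.py prints the aggregate result both as a table and as JSON; the interesting fields are iops and avg_latency_us in results[0]. A jq sketch for extracting them from a saved copy (perf.json is a hypothetical file name for the JSON block above); as a sanity check, 128 outstanding I/Os at roughly 6.9 k IOPS gives 128 / 6911 ≈ 18.5 ms, matching the reported average latency:

    # perf.json: saved copy of the JSON result block printed by bdevperf.py
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency"' perf.json
    # -> Nvme0n1: 6911.57... IOPS, 18507.73... us avg latency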
15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:36:48.983 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:48.983 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343982 00:36:48.983 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:48.983 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:48.983 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343982' 00:36:48.983 killing process with pid 3343982 00:36:48.983 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3343982 00:36:48.983 Received shutdown signal, test time was about 10.000000 seconds 00:36:48.983 00:36:48.983 Latency(us) 00:36:48.983 [2024-10-28T14:32:35.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.983 [2024-10-28T14:32:35.850Z] =================================================================================================================== 00:36:48.983 [2024-10-28T14:32:35.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:48.983 15:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3343982 00:36:49.554 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:50.126 15:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:50.779 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:50.779 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:51.040 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:51.040 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:36:51.040 15:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:51.981 [2024-10-28 15:32:38.566375] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:51.981 
15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:51.981 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:52.241 request: 00:36:52.241 { 00:36:52.241 "uuid": "1610a9df-bc46-4a55-a83f-046c93656822", 00:36:52.241 "method": "bdev_lvol_get_lvstores", 00:36:52.241 "req_id": 1 00:36:52.241 } 00:36:52.241 Got JSON-RPC error response 00:36:52.241 response: 00:36:52.241 { 00:36:52.241 "code": -19, 00:36:52.241 "message": "No such device" 00:36:52.241 } 00:36:52.241 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:36:52.241 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:52.241 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:52.241 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:52.242 15:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:52.826 aio_bdev 00:36:52.826 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
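After the listener and subsystem are torn down, the clean-path checks begin: free_clusters is read back (61 = 99 total clusters minus the 38 the lvol occupies), then the AIO bdev is deleted out from under the lvstore and the harness asserts, via its NOT helper, that bdev_lvol_get_lvstores now fails; the JSON-RPC error that follows (-19, "No such device") is the expected outcome, not a test failure. A plain-bash rendering of that negative assertion:

    free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free == 61 ))                                  # 99 clusters minus the lvol's 38
    $rpc bdev_aio_delete aio_bdev                     # the lvstore loses its base bdev
    if $rpc bdev_lvol_get_lvstores -u "$lvs"; then    # must now fail with -19 / No such device
        echo "lvstore still visible after base bdev removal" >&2
        exit 1
    fi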
0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d 00:36:52.826 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d 00:36:52.826 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:52.826 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:36:52.826 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:52.826 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:52.826 15:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:53.395 15:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d -t 2000 00:36:53.977 [ 00:36:53.977 { 00:36:53.977 "name": "0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d", 00:36:53.977 "aliases": [ 00:36:53.977 "lvs/lvol" 00:36:53.977 ], 00:36:53.977 "product_name": "Logical Volume", 00:36:53.977 "block_size": 4096, 00:36:53.977 "num_blocks": 38912, 00:36:53.977 "uuid": "0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d", 00:36:53.977 "assigned_rate_limits": { 00:36:53.977 "rw_ios_per_sec": 0, 00:36:53.977 "rw_mbytes_per_sec": 0, 00:36:53.977 "r_mbytes_per_sec": 0, 00:36:53.977 "w_mbytes_per_sec": 0 00:36:53.977 }, 00:36:53.977 "claimed": false, 00:36:53.977 "zoned": false, 00:36:53.977 "supported_io_types": { 00:36:53.977 "read": true, 00:36:53.977 "write": true, 00:36:53.977 "unmap": true, 00:36:53.977 "flush": false, 00:36:53.977 "reset": true, 00:36:53.977 "nvme_admin": false, 00:36:53.977 "nvme_io": false, 00:36:53.977 "nvme_io_md": false, 00:36:53.977 "write_zeroes": true, 00:36:53.977 "zcopy": false, 00:36:53.977 "get_zone_info": false, 00:36:53.977 "zone_management": false, 00:36:53.977 "zone_append": false, 00:36:53.977 "compare": false, 00:36:53.977 "compare_and_write": false, 00:36:53.977 "abort": false, 00:36:53.977 "seek_hole": true, 00:36:53.977 "seek_data": true, 00:36:53.977 "copy": false, 00:36:53.977 "nvme_iov_md": false 00:36:53.977 }, 00:36:53.977 "driver_specific": { 00:36:53.977 "lvol": { 00:36:53.977 "lvol_store_uuid": "1610a9df-bc46-4a55-a83f-046c93656822", 00:36:53.977 "base_bdev": "aio_bdev", 00:36:53.977 "thin_provision": false, 00:36:53.977 "num_allocated_clusters": 38, 00:36:53.977 "snapshot": false, 00:36:53.977 "clone": false, 00:36:53.977 "esnap_clone": false 00:36:53.977 } 00:36:53.977 } 00:36:53.977 } 00:36:53.977 ] 00:36:53.977 15:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:36:53.977 15:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:53.977 15:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:54.235 15:32:41 
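Re-creating the AIO bdev on the same (still 400 MiB) backing file lets the lvstore be examined and reloaded, which is what waitforbdev / bdev_wait_for_examine and the bdev_get_bdevs call on the lvol UUID are doing; the subsequent reads confirm nothing was lost across the reload (free_clusters still 61, total_data_clusters still 99). Sketch of that verification:

    $rpc bdev_aio_create "$aio_file" aio_bdev 4096    # same file, lvstore is re-examined
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$lvol" -t 2000            # lvol is back, aliased as "lvs/lvol"
    (( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))
    (( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))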
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:54.235 15:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:54.235 15:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:54.802 15:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:54.802 15:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0b79e2e6-89f8-4f99-bdf3-3a8e6a28fd8d 00:36:55.371 15:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1610a9df-bc46-4a55-a83f-046c93656822 00:36:55.940 15:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:56.890 00:36:56.890 real 0m26.073s 00:36:56.890 user 0m25.381s 00:36:56.890 sys 0m2.842s 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:56.890 ************************************ 00:36:56.890 END TEST lvs_grow_clean 00:36:56.890 ************************************ 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:56.890 ************************************ 00:36:56.890 START TEST lvs_grow_dirty 00:36:56.890 ************************************ 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
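The clean case ends with an orderly teardown, and run_test immediately starts lvs_grow_dirty, which calls the same lvs_grow function with a dirty argument so the branch guarded by the [[ ... == dirty ]] check seen earlier is taken on that run; the new lvstore and lvol UUIDs below (251acb69-... and b4007b78-...) show the same setup steps repeating. Teardown sketch for the clean case:

    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    $rpc bdev_aio_delete aio_bdev
    rm -f "$aio_file"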
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:56.890 15:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:57.456 15:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:57.456 15:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:58.025 15:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=251acb69-b020-421e-b5a0-dffed2fa7b76 00:36:58.025 15:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:58.025 15:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:36:58.963 15:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:58.963 15:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:58.963 15:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 251acb69-b020-421e-b5a0-dffed2fa7b76 lvol 150 00:36:59.532 15:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b4007b78-8c6b-4f83-9705-568e465a3813 00:36:59.532 15:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:59.532 15:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:59.790 [2024-10-28 15:32:46.594178] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:59.790 [2024-10-28 15:32:46.594295] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:59.790 true 00:36:59.790 15:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:36:59.790 15:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:00.360 15:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:00.360 15:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:00.931 15:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4007b78-8c6b-4f83-9705-568e465a3813 00:37:01.501 15:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:02.439 [2024-10-28 15:32:49.030567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.439 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3346915 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3346915 /var/tmp/bdevperf.sock 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3346915 ']' 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:02.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:02.698 15:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:02.698 [2024-10-28 15:32:49.536397] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:37:02.698 [2024-10-28 15:32:49.536571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346915 ] 00:37:02.958 [2024-10-28 15:32:49.681459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.958 [2024-10-28 15:32:49.789580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.218 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:03.218 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:37:03.218 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:03.788 Nvme0n1 00:37:03.788 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:04.048 [ 00:37:04.048 { 00:37:04.048 "name": "Nvme0n1", 00:37:04.048 "aliases": [ 00:37:04.048 "b4007b78-8c6b-4f83-9705-568e465a3813" 00:37:04.048 ], 00:37:04.048 "product_name": "NVMe disk", 00:37:04.048 "block_size": 4096, 00:37:04.048 "num_blocks": 38912, 00:37:04.048 "uuid": "b4007b78-8c6b-4f83-9705-568e465a3813", 00:37:04.048 "numa_id": 1, 00:37:04.048 "assigned_rate_limits": { 00:37:04.048 "rw_ios_per_sec": 0, 00:37:04.048 "rw_mbytes_per_sec": 0, 00:37:04.048 "r_mbytes_per_sec": 0, 00:37:04.048 "w_mbytes_per_sec": 0 00:37:04.048 }, 00:37:04.048 "claimed": false, 00:37:04.048 "zoned": false, 00:37:04.048 "supported_io_types": { 00:37:04.048 "read": true, 00:37:04.048 "write": true, 00:37:04.048 "unmap": true, 00:37:04.048 "flush": true, 00:37:04.048 "reset": true, 00:37:04.048 "nvme_admin": true, 00:37:04.048 "nvme_io": true, 00:37:04.048 "nvme_io_md": false, 00:37:04.048 "write_zeroes": true, 00:37:04.048 "zcopy": false, 00:37:04.048 "get_zone_info": false, 00:37:04.048 "zone_management": false, 00:37:04.048 "zone_append": false, 00:37:04.048 "compare": true, 00:37:04.048 "compare_and_write": true, 00:37:04.048 "abort": true, 00:37:04.048 "seek_hole": false, 00:37:04.048 "seek_data": false, 00:37:04.048 "copy": true, 00:37:04.048 "nvme_iov_md": false 00:37:04.048 }, 00:37:04.048 "memory_domains": [ 00:37:04.048 { 00:37:04.048 "dma_device_id": "system", 00:37:04.049 "dma_device_type": 1 00:37:04.049 } 00:37:04.049 ], 00:37:04.049 "driver_specific": { 00:37:04.049 "nvme": [ 00:37:04.049 { 00:37:04.049 "trid": { 00:37:04.049 "trtype": "TCP", 00:37:04.049 "adrfam": "IPv4", 00:37:04.049 "traddr": "10.0.0.2", 00:37:04.049 "trsvcid": "4420", 00:37:04.049 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:04.049 }, 00:37:04.049 "ctrlr_data": 
{ 00:37:04.049 "cntlid": 1, 00:37:04.049 "vendor_id": "0x8086", 00:37:04.049 "model_number": "SPDK bdev Controller", 00:37:04.049 "serial_number": "SPDK0", 00:37:04.049 "firmware_revision": "25.01", 00:37:04.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.049 "oacs": { 00:37:04.049 "security": 0, 00:37:04.049 "format": 0, 00:37:04.049 "firmware": 0, 00:37:04.049 "ns_manage": 0 00:37:04.049 }, 00:37:04.049 "multi_ctrlr": true, 00:37:04.049 "ana_reporting": false 00:37:04.049 }, 00:37:04.049 "vs": { 00:37:04.049 "nvme_version": "1.3" 00:37:04.049 }, 00:37:04.049 "ns_data": { 00:37:04.049 "id": 1, 00:37:04.049 "can_share": true 00:37:04.049 } 00:37:04.049 } 00:37:04.049 ], 00:37:04.049 "mp_policy": "active_passive" 00:37:04.049 } 00:37:04.049 } 00:37:04.049 ] 00:37:04.308 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3347054 00:37:04.308 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:04.308 15:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:04.568 Running I/O for 10 seconds... 00:37:05.506 Latency(us) 00:37:05.506 [2024-10-28T14:32:52.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:05.506 Nvme0n1 : 1.00 6480.00 25.31 0.00 0.00 0.00 0.00 0.00 00:37:05.506 [2024-10-28T14:32:52.373Z] =================================================================================================================== 00:37:05.506 [2024-10-28T14:32:52.373Z] Total : 6480.00 25.31 0.00 0.00 0.00 0.00 0.00 00:37:05.506 00:37:06.443 15:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:06.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:06.443 Nvme0n1 : 2.00 6607.00 25.81 0.00 0.00 0.00 0.00 0.00 00:37:06.443 [2024-10-28T14:32:53.310Z] =================================================================================================================== 00:37:06.443 [2024-10-28T14:32:53.310Z] Total : 6607.00 25.81 0.00 0.00 0.00 0.00 0.00 00:37:06.443 00:37:06.443 true 00:37:06.702 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:06.702 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:06.963 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:06.963 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:06.963 15:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3347054 00:37:07.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:07.534 Nvme0n1 : 3.00 
6698.00 26.16 0.00 0.00 0.00 0.00 0.00 00:37:07.534 [2024-10-28T14:32:54.401Z] =================================================================================================================== 00:37:07.534 [2024-10-28T14:32:54.401Z] Total : 6698.00 26.16 0.00 0.00 0.00 0.00 0.00 00:37:07.534 00:37:08.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:08.475 Nvme0n1 : 4.00 6731.50 26.29 0.00 0.00 0.00 0.00 0.00 00:37:08.475 [2024-10-28T14:32:55.342Z] =================================================================================================================== 00:37:08.475 [2024-10-28T14:32:55.342Z] Total : 6731.50 26.29 0.00 0.00 0.00 0.00 0.00 00:37:08.475 00:37:09.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:09.415 Nvme0n1 : 5.00 6670.00 26.05 0.00 0.00 0.00 0.00 0.00 00:37:09.415 [2024-10-28T14:32:56.282Z] =================================================================================================================== 00:37:09.415 [2024-10-28T14:32:56.282Z] Total : 6670.00 26.05 0.00 0.00 0.00 0.00 0.00 00:37:09.415 00:37:10.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:10.795 Nvme0n1 : 6.00 6964.17 27.20 0.00 0.00 0.00 0.00 0.00 00:37:10.795 [2024-10-28T14:32:57.662Z] =================================================================================================================== 00:37:10.795 [2024-10-28T14:32:57.662Z] Total : 6964.17 27.20 0.00 0.00 0.00 0.00 0.00 00:37:10.795 00:37:11.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:11.735 Nvme0n1 : 7.00 8144.71 31.82 0.00 0.00 0.00 0.00 0.00 00:37:11.735 [2024-10-28T14:32:58.602Z] =================================================================================================================== 00:37:11.735 [2024-10-28T14:32:58.602Z] Total : 8144.71 31.82 0.00 0.00 0.00 0.00 0.00 00:37:11.735 00:37:12.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:12.672 Nvme0n1 : 8.00 7992.38 31.22 0.00 0.00 0.00 0.00 0.00 00:37:12.672 [2024-10-28T14:32:59.539Z] =================================================================================================================== 00:37:12.672 [2024-10-28T14:32:59.539Z] Total : 7992.38 31.22 0.00 0.00 0.00 0.00 0.00 00:37:12.672 00:37:13.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:13.610 Nvme0n1 : 9.00 8710.11 34.02 0.00 0.00 0.00 0.00 0.00 00:37:13.610 [2024-10-28T14:33:00.477Z] =================================================================================================================== 00:37:13.610 [2024-10-28T14:33:00.477Z] Total : 8710.11 34.02 0.00 0.00 0.00 0.00 0.00 00:37:13.610 00:37:14.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:14.552 Nvme0n1 : 10.00 8665.80 33.85 0.00 0.00 0.00 0.00 0.00 00:37:14.552 [2024-10-28T14:33:01.419Z] =================================================================================================================== 00:37:14.552 [2024-10-28T14:33:01.419Z] Total : 8665.80 33.85 0.00 0.00 0.00 0.00 0.00 00:37:14.552 00:37:14.552 00:37:14.552 Latency(us) 00:37:14.552 [2024-10-28T14:33:01.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:14.552 Nvme0n1 : 10.01 8670.92 33.87 0.00 0.00 14751.96 4611.79 40389.59 00:37:14.552 [2024-10-28T14:33:01.419Z] 
=================================================================================================================== 00:37:14.552 [2024-10-28T14:33:01.419Z] Total : 8670.92 33.87 0.00 0.00 14751.96 4611.79 40389.59 00:37:14.552 { 00:37:14.552 "results": [ 00:37:14.552 { 00:37:14.552 "job": "Nvme0n1", 00:37:14.552 "core_mask": "0x2", 00:37:14.552 "workload": "randwrite", 00:37:14.552 "status": "finished", 00:37:14.552 "queue_depth": 128, 00:37:14.552 "io_size": 4096, 00:37:14.552 "runtime": 10.008852, 00:37:14.552 "iops": 8670.924497634694, 00:37:14.552 "mibps": 33.870798818885525, 00:37:14.552 "io_failed": 0, 00:37:14.552 "io_timeout": 0, 00:37:14.552 "avg_latency_us": 14751.963913210102, 00:37:14.552 "min_latency_us": 4611.792592592593, 00:37:14.552 "max_latency_us": 40389.59407407408 00:37:14.552 } 00:37:14.552 ], 00:37:14.552 "core_count": 1 00:37:14.552 } 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3346915 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3346915 ']' 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3346915 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3346915 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3346915' 00:37:14.552 killing process with pid 3346915 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3346915 00:37:14.552 Received shutdown signal, test time was about 10.000000 seconds 00:37:14.552 00:37:14.552 Latency(us) 00:37:14.552 [2024-10-28T14:33:01.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.552 [2024-10-28T14:33:01.419Z] =================================================================================================================== 00:37:14.552 [2024-10-28T14:33:01.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:14.552 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3346915 00:37:14.812 15:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:15.751 15:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:16.320 
15:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:16.320 15:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3343139 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3343139 00:37:16.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3343139 Killed "${NVMF_APP[@]}" "$@" 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3348603 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3348603 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3348603 ']' 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.579 15:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:16.842 [2024-10-28 15:33:03.494510] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:16.842 [2024-10-28 15:33:03.495797] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:37:16.842 [2024-10-28 15:33:03.495871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.842 [2024-10-28 15:33:03.638420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.106 [2024-10-28 15:33:03.760444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:17.106 [2024-10-28 15:33:03.760553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:17.106 [2024-10-28 15:33:03.760590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:17.106 [2024-10-28 15:33:03.760630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:17.106 [2024-10-28 15:33:03.760672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:17.106 [2024-10-28 15:33:03.762004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.106 [2024-10-28 15:33:03.939266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:17.106 [2024-10-28 15:33:03.939998] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
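The "dirty" part of the test is this restart: the original nvmf_tgt is killed with SIGKILL so the lvstore is never unloaded cleanly, and a fresh target is started, here in interrupt mode inside the cvl_0_0_ns_spdk namespace. Reduced to a sketch (NVMF_PID stands in for the old target's pid, other placeholders as above):

  # kill the target without letting the lvstore close cleanly
  kill -9 "$NVMF_PID"

  # fresh target in interrupt mode inside the test namespace
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

  # re-attaching the backing file triggers blobstore recovery on load
  # ("Performing recovery on blobstore" in the trace that follows)
  scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096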
00:37:17.365 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:17.365 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:37:17.365 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:17.365 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:17.365 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:17.365 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.365 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:18.307 [2024-10-28 15:33:04.844575] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:18.307 [2024-10-28 15:33:04.844898] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:18.307 [2024-10-28 15:33:04.845029] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b4007b78-8c6b-4f83-9705-568e465a3813 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b4007b78-8c6b-4f83-9705-568e465a3813 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:18.307 15:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:18.877 15:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b4007b78-8c6b-4f83-9705-568e465a3813 -t 2000 00:37:19.446 [ 00:37:19.446 { 00:37:19.446 "name": "b4007b78-8c6b-4f83-9705-568e465a3813", 00:37:19.446 "aliases": [ 00:37:19.446 "lvs/lvol" 00:37:19.446 ], 00:37:19.446 "product_name": "Logical Volume", 00:37:19.446 "block_size": 4096, 00:37:19.446 "num_blocks": 38912, 00:37:19.446 "uuid": "b4007b78-8c6b-4f83-9705-568e465a3813", 00:37:19.446 "assigned_rate_limits": { 00:37:19.446 "rw_ios_per_sec": 0, 00:37:19.446 "rw_mbytes_per_sec": 0, 00:37:19.446 
"r_mbytes_per_sec": 0, 00:37:19.446 "w_mbytes_per_sec": 0 00:37:19.446 }, 00:37:19.446 "claimed": false, 00:37:19.446 "zoned": false, 00:37:19.446 "supported_io_types": { 00:37:19.446 "read": true, 00:37:19.446 "write": true, 00:37:19.446 "unmap": true, 00:37:19.446 "flush": false, 00:37:19.447 "reset": true, 00:37:19.447 "nvme_admin": false, 00:37:19.447 "nvme_io": false, 00:37:19.447 "nvme_io_md": false, 00:37:19.447 "write_zeroes": true, 00:37:19.447 "zcopy": false, 00:37:19.447 "get_zone_info": false, 00:37:19.447 "zone_management": false, 00:37:19.447 "zone_append": false, 00:37:19.447 "compare": false, 00:37:19.447 "compare_and_write": false, 00:37:19.447 "abort": false, 00:37:19.447 "seek_hole": true, 00:37:19.447 "seek_data": true, 00:37:19.447 "copy": false, 00:37:19.447 "nvme_iov_md": false 00:37:19.447 }, 00:37:19.447 "driver_specific": { 00:37:19.447 "lvol": { 00:37:19.447 "lvol_store_uuid": "251acb69-b020-421e-b5a0-dffed2fa7b76", 00:37:19.447 "base_bdev": "aio_bdev", 00:37:19.447 "thin_provision": false, 00:37:19.447 "num_allocated_clusters": 38, 00:37:19.447 "snapshot": false, 00:37:19.447 "clone": false, 00:37:19.447 "esnap_clone": false 00:37:19.447 } 00:37:19.447 } 00:37:19.447 } 00:37:19.447 ] 00:37:19.447 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:19.447 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:19.447 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:20.386 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:20.386 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:20.386 15:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:20.957 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:20.957 15:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:21.526 [2024-10-28 15:33:08.147124] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:21.526 15:33:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:21.526 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:22.097 request: 00:37:22.097 { 00:37:22.097 "uuid": "251acb69-b020-421e-b5a0-dffed2fa7b76", 00:37:22.097 "method": "bdev_lvol_get_lvstores", 00:37:22.097 "req_id": 1 00:37:22.097 } 00:37:22.097 Got JSON-RPC error response 00:37:22.097 response: 00:37:22.097 { 00:37:22.097 "code": -19, 00:37:22.097 "message": "No such device" 00:37:22.097 } 00:37:22.097 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:37:22.097 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:22.097 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:22.097 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:22.097 15:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:23.037 aio_bdev 00:37:23.037 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b4007b78-8c6b-4f83-9705-568e465a3813 00:37:23.037 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b4007b78-8c6b-4f83-9705-568e465a3813 00:37:23.037 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:23.037 15:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:23.037 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:23.037 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:23.037 15:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:23.603 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b4007b78-8c6b-4f83-9705-568e465a3813 -t 2000 00:37:23.863 [ 00:37:23.863 { 00:37:23.863 "name": "b4007b78-8c6b-4f83-9705-568e465a3813", 00:37:23.863 "aliases": [ 00:37:23.863 "lvs/lvol" 00:37:23.863 ], 00:37:23.863 "product_name": "Logical Volume", 00:37:23.863 "block_size": 4096, 00:37:23.863 "num_blocks": 38912, 00:37:23.863 "uuid": "b4007b78-8c6b-4f83-9705-568e465a3813", 00:37:23.863 "assigned_rate_limits": { 00:37:23.864 "rw_ios_per_sec": 0, 00:37:23.864 "rw_mbytes_per_sec": 0, 00:37:23.864 "r_mbytes_per_sec": 0, 00:37:23.864 "w_mbytes_per_sec": 0 00:37:23.864 }, 00:37:23.864 "claimed": false, 00:37:23.864 "zoned": false, 00:37:23.864 "supported_io_types": { 00:37:23.864 "read": true, 00:37:23.864 "write": true, 00:37:23.864 "unmap": true, 00:37:23.864 "flush": false, 00:37:23.864 "reset": true, 00:37:23.864 "nvme_admin": false, 00:37:23.864 "nvme_io": false, 00:37:23.864 "nvme_io_md": false, 00:37:23.864 "write_zeroes": true, 00:37:23.864 "zcopy": false, 00:37:23.864 "get_zone_info": false, 00:37:23.864 "zone_management": false, 00:37:23.864 "zone_append": false, 00:37:23.864 "compare": false, 00:37:23.864 "compare_and_write": false, 00:37:23.864 "abort": false, 00:37:23.864 "seek_hole": true, 00:37:23.864 "seek_data": true, 00:37:23.864 "copy": false, 00:37:23.864 "nvme_iov_md": false 00:37:23.864 }, 00:37:23.864 "driver_specific": { 00:37:23.864 "lvol": { 00:37:23.864 "lvol_store_uuid": "251acb69-b020-421e-b5a0-dffed2fa7b76", 00:37:23.864 "base_bdev": "aio_bdev", 00:37:23.864 "thin_provision": false, 00:37:23.864 "num_allocated_clusters": 38, 00:37:23.864 "snapshot": false, 00:37:23.864 "clone": false, 00:37:23.864 "esnap_clone": false 00:37:23.864 } 00:37:23.864 } 00:37:23.864 } 00:37:23.864 ] 00:37:23.864 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:23.864 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:23.864 15:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:24.437 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:24.437 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:24.437 15:33:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:25.431 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:25.431 15:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4007b78-8c6b-4f83-9705-568e465a3813 00:37:25.689 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 251acb69-b020-421e-b5a0-dffed2fa7b76 00:37:26.288 15:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:26.857 00:37:26.857 real 0m29.994s 00:37:26.857 user 0m46.483s 00:37:26.857 sys 0m6.305s 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:26.857 ************************************ 00:37:26.857 END TEST lvs_grow_dirty 00:37:26.857 ************************************ 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:26.857 nvmf_trace.0 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
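After recovery the test also deletes and re-creates the AIO bdev to exercise the hotremove path (the "No such device" JSON-RPC error above is the expected response while the lvstore is gone), then confirms the grown geometry survived the dirty restart: the recovered lvstore still reports 99 total data clusters with 61 free, i.e. the 38 clusters allocated to the lvol are intact. The final verification and teardown traced above, as a sketch with the same placeholders:

  # geometry must match what was visible before the SIGKILL
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

  # teardown: lvol, lvstore, AIO bdev, backing file
  scripts/rpc.py bdev_lvol_delete "$lvol"
  scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f "$AIO_FILE"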
00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:26.857 rmmod nvme_tcp 00:37:26.857 rmmod nvme_fabrics 00:37:26.857 rmmod nvme_keyring 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3348603 ']' 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3348603 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3348603 ']' 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3348603 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:26.857 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3348603 00:37:27.116 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:27.116 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:27.116 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3348603' 00:37:27.116 killing process with pid 3348603 00:37:27.116 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3348603 00:37:27.116 15:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3348603 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.376 15:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.287 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.287 00:37:29.287 real 1m3.178s 00:37:29.287 user 1m14.645s 00:37:29.287 sys 0m12.096s 00:37:29.287 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:29.287 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:29.287 ************************************ 00:37:29.287 END TEST nvmf_lvs_grow 00:37:29.287 ************************************ 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:29.545 ************************************ 00:37:29.545 START TEST nvmf_bdev_io_wait 00:37:29.545 ************************************ 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:29.545 * Looking for test storage... 
00:37:29.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lcov --version 00:37:29.545 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:37:29.805 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:37:29.805 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.805 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.805 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:37:29.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.806 --rc genhtml_branch_coverage=1 00:37:29.806 --rc genhtml_function_coverage=1 00:37:29.806 --rc genhtml_legend=1 00:37:29.806 --rc geninfo_all_blocks=1 00:37:29.806 --rc geninfo_unexecuted_blocks=1 00:37:29.806 00:37:29.806 ' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:37:29.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.806 --rc genhtml_branch_coverage=1 00:37:29.806 --rc genhtml_function_coverage=1 00:37:29.806 --rc genhtml_legend=1 00:37:29.806 --rc geninfo_all_blocks=1 00:37:29.806 --rc geninfo_unexecuted_blocks=1 00:37:29.806 00:37:29.806 ' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:37:29.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.806 --rc genhtml_branch_coverage=1 00:37:29.806 --rc genhtml_function_coverage=1 00:37:29.806 --rc genhtml_legend=1 00:37:29.806 --rc geninfo_all_blocks=1 00:37:29.806 --rc geninfo_unexecuted_blocks=1 00:37:29.806 00:37:29.806 ' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:37:29.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.806 --rc genhtml_branch_coverage=1 00:37:29.806 --rc genhtml_function_coverage=1 00:37:29.806 --rc genhtml_legend=1 00:37:29.806 --rc geninfo_all_blocks=1 00:37:29.806 --rc 
geninfo_unexecuted_blocks=1 00:37:29.806 00:37:29.806 ' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:29.806 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.807 15:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:33.105 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:33.105 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:33.105 Found net devices under 0000:84:00.0: cvl_0_0 00:37:33.105 
15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.105 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:33.106 Found net devices under 0000:84:00.1: cvl_0_1 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:33.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:37:33.106 00:37:33.106 --- 10.0.0.2 ping statistics --- 00:37:33.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.106 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:33.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:37:33.106 00:37:33.106 --- 10.0.0.1 ping statistics --- 00:37:33.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.106 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3352310 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3352310 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3352310 ']' 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:33.106 15:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:33.106 [2024-10-28 15:33:19.710035] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:33.106 [2024-10-28 15:33:19.712324] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:37:33.106 [2024-10-28 15:33:19.712461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.106 [2024-10-28 15:33:19.879863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:33.366 [2024-10-28 15:33:20.001711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.366 [2024-10-28 15:33:20.001780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.366 [2024-10-28 15:33:20.001802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.366 [2024-10-28 15:33:20.001818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.366 [2024-10-28 15:33:20.001833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.366 [2024-10-28 15:33:20.004172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.366 [2024-10-28 15:33:20.004219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.366 [2024-10-28 15:33:20.004288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:33.366 [2024-10-28 15:33:20.004291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.366 [2024-10-28 15:33:20.005049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
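For readers following the trace, the nvmftestinit/nvmfappstart steps above reduce to a short sequence of iproute2/iptables commands plus an nvmf_tgt launch inside the namespace. A minimal standalone sketch, assuming an SPDK build tree in the current directory and the two E810 ports already renamed cvl_0_0/cvl_0_1 as in this run (names, addresses and flags copied from the trace):

# Put the target port in its own network namespace and address both ends
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                      # verify the data path
sudo modprobe nvme-tcp
# Launch the target in interrupt mode inside the namespace, as the test does
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
    --interrupt-mode -m 0xF --wait-for-rpc &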
00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.302 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.560 [2024-10-28 15:33:21.183066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:34.560 [2024-10-28 15:33:21.183343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:34.560 [2024-10-28 15:33:21.184295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:34.560 [2024-10-28 15:33:21.185303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.560 [2024-10-28 15:33:21.197487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.560 Malloc0 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:34.560 [2024-10-28 15:33:21.273461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3352590 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3352592 00:37:34.560 15:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3352594 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.560 { 00:37:34.560 "params": { 00:37:34.560 "name": "Nvme$subsystem", 00:37:34.560 "trtype": "$TEST_TRANSPORT", 00:37:34.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.560 "adrfam": "ipv4", 00:37:34.560 "trsvcid": "$NVMF_PORT", 00:37:34.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.560 "hdgst": ${hdgst:-false}, 00:37:34.560 "ddgst": ${ddgst:-false} 00:37:34.560 }, 00:37:34.560 "method": "bdev_nvme_attach_controller" 00:37:34.560 } 00:37:34.560 EOF 00:37:34.560 )") 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.560 { 00:37:34.560 "params": { 00:37:34.560 "name": "Nvme$subsystem", 00:37:34.560 "trtype": "$TEST_TRANSPORT", 00:37:34.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.560 "adrfam": "ipv4", 00:37:34.560 "trsvcid": "$NVMF_PORT", 00:37:34.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.560 "hdgst": ${hdgst:-false}, 00:37:34.560 "ddgst": ${ddgst:-false} 00:37:34.560 }, 00:37:34.560 "method": "bdev_nvme_attach_controller" 00:37:34.560 } 00:37:34.560 EOF 00:37:34.560 )") 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3352596 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:34.560 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.560 { 00:37:34.560 "params": { 00:37:34.560 "name": "Nvme$subsystem", 00:37:34.560 "trtype": "$TEST_TRANSPORT", 00:37:34.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.560 "adrfam": "ipv4", 00:37:34.560 "trsvcid": "$NVMF_PORT", 00:37:34.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.560 "hdgst": ${hdgst:-false}, 00:37:34.560 "ddgst": ${ddgst:-false} 00:37:34.560 }, 00:37:34.560 "method": "bdev_nvme_attach_controller" 00:37:34.560 } 00:37:34.560 EOF 00:37:34.560 )") 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.561 { 00:37:34.561 "params": { 00:37:34.561 "name": "Nvme$subsystem", 00:37:34.561 "trtype": "$TEST_TRANSPORT", 00:37:34.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.561 "adrfam": "ipv4", 00:37:34.561 "trsvcid": "$NVMF_PORT", 00:37:34.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.561 "hdgst": ${hdgst:-false}, 00:37:34.561 "ddgst": ${ddgst:-false} 00:37:34.561 }, 00:37:34.561 "method": "bdev_nvme_attach_controller" 00:37:34.561 } 00:37:34.561 EOF 00:37:34.561 )") 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3352590 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:34.561 "params": { 00:37:34.561 "name": "Nvme1", 00:37:34.561 "trtype": "tcp", 00:37:34.561 "traddr": "10.0.0.2", 00:37:34.561 "adrfam": "ipv4", 00:37:34.561 "trsvcid": "4420", 00:37:34.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:34.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:34.561 "hdgst": false, 00:37:34.561 "ddgst": false 00:37:34.561 }, 00:37:34.561 "method": "bdev_nvme_attach_controller" 00:37:34.561 }' 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:34.561 "params": { 00:37:34.561 "name": "Nvme1", 00:37:34.561 "trtype": "tcp", 00:37:34.561 "traddr": "10.0.0.2", 00:37:34.561 "adrfam": "ipv4", 00:37:34.561 "trsvcid": "4420", 00:37:34.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:34.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:34.561 "hdgst": false, 00:37:34.561 "ddgst": false 00:37:34.561 }, 00:37:34.561 "method": "bdev_nvme_attach_controller" 00:37:34.561 }' 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:34.561 "params": { 00:37:34.561 "name": "Nvme1", 00:37:34.561 "trtype": "tcp", 00:37:34.561 "traddr": "10.0.0.2", 00:37:34.561 "adrfam": "ipv4", 00:37:34.561 "trsvcid": "4420", 00:37:34.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:34.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:34.561 "hdgst": false, 00:37:34.561 "ddgst": false 00:37:34.561 }, 00:37:34.561 "method": "bdev_nvme_attach_controller" 00:37:34.561 }' 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:34.561 15:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:34.561 "params": { 00:37:34.561 "name": "Nvme1", 00:37:34.561 "trtype": "tcp", 00:37:34.561 "traddr": "10.0.0.2", 00:37:34.561 "adrfam": "ipv4", 00:37:34.561 "trsvcid": "4420", 00:37:34.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:34.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:34.561 "hdgst": false, 00:37:34.561 "ddgst": false 00:37:34.561 }, 00:37:34.561 "method": "bdev_nvme_attach_controller" 00:37:34.561 }' 00:37:34.561 [2024-10-28 15:33:21.329242] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:37:34.561 [2024-10-28 15:33:21.329325] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:37:34.561 [2024-10-28 15:33:21.329835] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:37:34.561 [2024-10-28 15:33:21.329834] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
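The rpc_cmd configuration calls traced above are ordinary JSON-RPC requests against /var/tmp/spdk.sock; an equivalent manual sequence with scripts/rpc.py, using the exact arguments shown in the trace, would be roughly:

# Deliberately small bdev_io pool and cache so submissions run out and must
# queue, which is the io_wait behaviour this test exercises
./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init      # target was started with --wait-for-rpc
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420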
00:37:34.561 [2024-10-28 15:33:21.329832] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:37:34.561 [2024-10-28 15:33:21.329946] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:37:34.561 [2024-10-28 15:33:21.329946] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:37:34.561 [2024-10-28 15:33:21.329946] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:37:34.818 [2024-10-28 15:33:21.507522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.818 [2024-10-28 15:33:21.559477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:34.818 [2024-10-28 15:33:21.617935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.818 [2024-10-28 15:33:21.679292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:35.076 [2024-10-28 15:33:21.759954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.076 [2024-10-28 15:33:21.820806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:35.076 [2024-10-28 15:33:21.898366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.334 [2024-10-28 15:33:21.955985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:35.334 Running I/O for 1 seconds... 00:37:35.334 Running I/O for 1 seconds... 00:37:35.334 Running I/O for 1 seconds... 00:37:35.592 Running I/O for 1 seconds...
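Each of the four bdevperf instances above receives its NVMe bdev configuration from gen_nvmf_target_json over a --json /dev/fd/63 process substitution. A hand-written equivalent for a single instance, assuming the standard SPDK app JSON-config wrapper around the bdev_nvme_attach_controller params printed in the trace (the file name is illustrative):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# One of the four workloads from the trace (write, core mask 0x10, shm id 1)
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json \
    -q 128 -o 4096 -w write -t 1 -s 256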
00:37:36.526 6297.00 IOPS, 24.60 MiB/s 00:37:36.526 Latency(us) 00:37:36.526 [2024-10-28T14:33:23.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.526 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:37:36.526 Nvme1n1 : 1.02 6314.35 24.67 0.00 0.00 20083.15 4611.79 27962.03 00:37:36.526 [2024-10-28T14:33:23.393Z] =================================================================================================================== 00:37:36.526 [2024-10-28T14:33:23.393Z] Total : 6314.35 24.67 0.00 0.00 20083.15 4611.79 27962.03 00:37:36.526 201504.00 IOPS, 787.12 MiB/s 00:37:36.526 Latency(us) 00:37:36.526 [2024-10-28T14:33:23.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.526 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:37:36.526 Nvme1n1 : 1.00 201122.83 785.64 0.00 0.00 632.79 291.27 1856.85 00:37:36.526 [2024-10-28T14:33:23.393Z] =================================================================================================================== 00:37:36.526 [2024-10-28T14:33:23.393Z] Total : 201122.83 785.64 0.00 0.00 632.79 291.27 1856.85 00:37:36.526 5987.00 IOPS, 23.39 MiB/s 00:37:36.526 Latency(us) 00:37:36.526 [2024-10-28T14:33:23.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.526 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:37:36.526 Nvme1n1 : 1.01 6067.50 23.70 0.00 0.00 21006.02 6941.96 33010.73 00:37:36.526 [2024-10-28T14:33:23.393Z] =================================================================================================================== 00:37:36.526 [2024-10-28T14:33:23.393Z] Total : 6067.50 23.70 0.00 0.00 21006.02 6941.96 33010.73 00:37:36.526 8683.00 IOPS, 33.92 MiB/s 00:37:36.526 Latency(us) 00:37:36.526 [2024-10-28T14:33:23.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.526 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:37:36.526 Nvme1n1 : 1.01 8744.04 34.16 0.00 0.00 14573.25 1953.94 20291.89 00:37:36.526 [2024-10-28T14:33:23.393Z] =================================================================================================================== 00:37:36.526 [2024-10-28T14:33:23.393Z] Total : 8744.04 34.16 0.00 0.00 14573.25 1953.94 20291.89 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3352592 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3352594 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3352596 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:36.785 rmmod nvme_tcp 00:37:36.785 rmmod nvme_fabrics 00:37:36.785 rmmod nvme_keyring 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3352310 ']' 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3352310 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3352310 ']' 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3352310 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3352310 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3352310' 00:37:36.785 killing process with pid 3352310 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3352310 00:37:36.785 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3352310 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.356 15:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.261 00:37:39.261 real 0m9.845s 00:37:39.261 user 0m16.972s 00:37:39.261 sys 0m5.172s 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:39.261 ************************************ 00:37:39.261 END TEST nvmf_bdev_io_wait 00:37:39.261 ************************************ 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:39.261 ************************************ 00:37:39.261 START TEST nvmf_queue_depth 00:37:39.261 ************************************ 00:37:39.261 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:39.521 * Looking for test storage... 
00:37:39.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lcov --version 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.521 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:37:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.781 --rc genhtml_branch_coverage=1 00:37:39.781 --rc genhtml_function_coverage=1 00:37:39.781 --rc genhtml_legend=1 00:37:39.781 --rc geninfo_all_blocks=1 00:37:39.781 --rc geninfo_unexecuted_blocks=1 00:37:39.781 00:37:39.781 ' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:37:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.781 --rc genhtml_branch_coverage=1 00:37:39.781 --rc genhtml_function_coverage=1 00:37:39.781 --rc genhtml_legend=1 00:37:39.781 --rc geninfo_all_blocks=1 00:37:39.781 --rc geninfo_unexecuted_blocks=1 00:37:39.781 00:37:39.781 ' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:37:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.781 --rc genhtml_branch_coverage=1 00:37:39.781 --rc genhtml_function_coverage=1 00:37:39.781 --rc genhtml_legend=1 00:37:39.781 --rc geninfo_all_blocks=1 00:37:39.781 --rc geninfo_unexecuted_blocks=1 00:37:39.781 00:37:39.781 ' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:37:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.781 --rc genhtml_branch_coverage=1 00:37:39.781 --rc genhtml_function_coverage=1 00:37:39.781 --rc genhtml_legend=1 00:37:39.781 --rc geninfo_all_blocks=1 00:37:39.781 --rc 
geninfo_unexecuted_blocks=1 00:37:39.781 00:37:39.781 ' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:37:39.781 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:37:39.782 15:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:42.322 15:33:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:42.322 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:42.322 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:37:42.322 Found net devices under 0000:84:00.0: cvl_0_0 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:42.322 Found net devices under 0000:84:00.1: cvl_0_1 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:42.322 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:42.323 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:42.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:42.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:37:42.584 00:37:42.584 --- 10.0.0.2 ping statistics --- 00:37:42.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.584 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:42.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:42.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:37:42.584 00:37:42.584 --- 10.0.0.1 ping statistics --- 00:37:42.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.584 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3354860 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3354860 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3354860 ']' 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
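[annotation] The nvmf_tcp_init trace above builds the point-to-point TCP test bed for the queue-depth run: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and reachability is verified with one ping in each direction. A condensed sketch of the equivalent commands, using the interface and namespace names taken from the trace (helper internals in nvmf/common.sh may differ in detail):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace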
00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:42.584 15:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:42.584 [2024-10-28 15:33:29.409861] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:42.584 [2024-10-28 15:33:29.411168] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:37:42.584 [2024-10-28 15:33:29.411242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.845 [2024-10-28 15:33:29.553006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.845 [2024-10-28 15:33:29.670531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.845 [2024-10-28 15:33:29.670638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.845 [2024-10-28 15:33:29.670693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.845 [2024-10-28 15:33:29.670724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:42.845 [2024-10-28 15:33:29.670750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:42.845 [2024-10-28 15:33:29.672048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.105 [2024-10-28 15:33:29.838854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:43.105 [2024-10-28 15:33:29.839379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
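[annotation] The target was launched inside the namespace as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2` (pid 3354860), and the NOTICE lines above confirm that both app_thread and the nvmf poll-group thread are running in interrupt mode rather than busy polling. The trace that follows configures the target over /var/tmp/spdk.sock and then drives it from bdevperf; a condensed sketch of that sequence using scripts/rpc.py directly (the harness goes through its rpc_cmd wrapper, so treat this as an illustrative equivalent, not the literal test code):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: 1024-deep, 4 KiB verify workload for 10 s against the exported namespace
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With 1024 I/Os kept outstanding, Little's law predicts an average latency of roughly queue depth divided by IOPS; the ~3805 IOPS and ~267 ms average reported in the results further down are consistent with that (1024 / 3805 ≈ 0.269 s), i.e. the measured latency is dominated by queueing on the target, which is what this queue-depth test exercises.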
00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.366 [2024-10-28 15:33:30.081166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.366 Malloc0 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.366 [2024-10-28 15:33:30.161494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3354986 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3354986 /var/tmp/bdevperf.sock 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3354986 ']' 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:43.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:43.366 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:43.366 [2024-10-28 15:33:30.215716] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
00:37:43.366 [2024-10-28 15:33:30.215815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354986 ] 00:37:43.627 [2024-10-28 15:33:30.344725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.627 [2024-10-28 15:33:30.462886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.196 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:44.196 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:37:44.196 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:44.196 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.196 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:44.196 NVMe0n1 00:37:44.196 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.196 15:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:44.456 Running I/O for 10 seconds... 00:37:46.336 3502.00 IOPS, 13.68 MiB/s [2024-10-28T14:33:34.145Z] 3640.50 IOPS, 14.22 MiB/s [2024-10-28T14:33:35.530Z] 3754.67 IOPS, 14.67 MiB/s [2024-10-28T14:33:36.473Z] 3776.50 IOPS, 14.75 MiB/s [2024-10-28T14:33:37.416Z] 3689.60 IOPS, 14.41 MiB/s [2024-10-28T14:33:38.356Z] 3754.67 IOPS, 14.67 MiB/s [2024-10-28T14:33:39.294Z] 3736.43 IOPS, 14.60 MiB/s [2024-10-28T14:33:40.234Z] 3744.00 IOPS, 14.62 MiB/s [2024-10-28T14:33:41.177Z] 3755.56 IOPS, 14.67 MiB/s [2024-10-28T14:33:41.437Z] 3788.90 IOPS, 14.80 MiB/s 00:37:54.570 Latency(us) 00:37:54.570 [2024-10-28T14:33:41.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.570 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:37:54.570 Verification LBA range: start 0x0 length 0x4000 00:37:54.570 NVMe0n1 : 10.23 3804.92 14.86 0.00 0.00 266895.70 50486.99 159228.21 00:37:54.570 [2024-10-28T14:33:41.437Z] =================================================================================================================== 00:37:54.570 [2024-10-28T14:33:41.437Z] Total : 3804.92 14.86 0.00 0.00 266895.70 50486.99 159228.21 00:37:54.570 { 00:37:54.570 "results": [ 00:37:54.570 { 00:37:54.570 "job": "NVMe0n1", 00:37:54.570 "core_mask": "0x1", 00:37:54.570 "workload": "verify", 00:37:54.570 "status": "finished", 00:37:54.570 "verify_range": { 00:37:54.570 "start": 0, 00:37:54.570 "length": 16384 00:37:54.570 }, 00:37:54.570 "queue_depth": 1024, 00:37:54.570 "io_size": 4096, 00:37:54.570 "runtime": 10.226768, 00:37:54.570 "iops": 3804.916665754029, 00:37:54.570 "mibps": 14.862955725601676, 00:37:54.570 "io_failed": 0, 00:37:54.570 "io_timeout": 0, 00:37:54.570 "avg_latency_us": 266895.699337232, 00:37:54.570 "min_latency_us": 50486.99259259259, 00:37:54.570 "max_latency_us": 159228.2074074074 00:37:54.570 } 00:37:54.570 
], 00:37:54.570 "core_count": 1 00:37:54.570 } 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3354986 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3354986 ']' 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3354986 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3354986 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3354986' 00:37:54.570 killing process with pid 3354986 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3354986 00:37:54.570 Received shutdown signal, test time was about 10.000000 seconds 00:37:54.570 00:37:54.570 Latency(us) 00:37:54.570 [2024-10-28T14:33:41.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.570 [2024-10-28T14:33:41.437Z] =================================================================================================================== 00:37:54.570 [2024-10-28T14:33:41.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:54.570 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3354986 00:37:55.139 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:37:55.139 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:37:55.139 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:55.140 rmmod nvme_tcp 00:37:55.140 rmmod nvme_fabrics 00:37:55.140 rmmod nvme_keyring 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:37:55.140 15:33:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3354860 ']' 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3354860 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3354860 ']' 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3354860 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3354860 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3354860' 00:37:55.140 killing process with pid 3354860 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3354860 00:37:55.140 15:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3354860 00:37:55.399 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:55.399 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:55.399 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:55.399 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:37:55.399 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:37:55.399 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:55.400 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:37:55.660 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:55.660 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:55.660 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.660 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:55.660 15:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:57.613 00:37:57.613 real 0m18.212s 00:37:57.613 user 0m24.252s 00:37:57.613 sys 0m4.582s 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:57.613 ************************************ 00:37:57.613 END TEST nvmf_queue_depth 00:37:57.613 ************************************ 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:57.613 ************************************ 00:37:57.613 START TEST nvmf_target_multipath 00:37:57.613 ************************************ 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:57.613 * Looking for test storage... 00:37:57.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:57.613 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:37:57.614 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lcov --version 00:37:57.614 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:37:57.899 15:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:37:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.899 --rc genhtml_branch_coverage=1 00:37:57.899 --rc genhtml_function_coverage=1 00:37:57.899 --rc genhtml_legend=1 00:37:57.899 --rc geninfo_all_blocks=1 00:37:57.899 --rc geninfo_unexecuted_blocks=1 00:37:57.899 00:37:57.899 ' 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:37:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.899 --rc genhtml_branch_coverage=1 00:37:57.899 --rc genhtml_function_coverage=1 00:37:57.899 --rc genhtml_legend=1 00:37:57.899 --rc geninfo_all_blocks=1 00:37:57.899 --rc geninfo_unexecuted_blocks=1 00:37:57.899 00:37:57.899 ' 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:37:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.899 --rc genhtml_branch_coverage=1 00:37:57.899 --rc genhtml_function_coverage=1 00:37:57.899 --rc genhtml_legend=1 00:37:57.899 --rc geninfo_all_blocks=1 00:37:57.899 --rc 
geninfo_unexecuted_blocks=1 00:37:57.899 00:37:57.899 ' 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:37:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.899 --rc genhtml_branch_coverage=1 00:37:57.899 --rc genhtml_function_coverage=1 00:37:57.899 --rc genhtml_legend=1 00:37:57.899 --rc geninfo_all_blocks=1 00:37:57.899 --rc geninfo_unexecuted_blocks=1 00:37:57.899 00:37:57.899 ' 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:57.899 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:57.900 15:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:37:57.900 15:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
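[annotation] The discovery pass that follows is the same one run before the queue-depth test: gather_supported_nvmf_pci_devs matches the two E810 ports by PCI vendor/device ID (0x8086:0x159b), then resolves each PCI function to its kernel net device through sysfs (cvl_0_0 and cvl_0_1 here). The sysfs lookup it relies on can be reproduced by hand, using the device addresses from the trace:

    ls /sys/bus/pci/devices/0000:84:00.0/net     # -> cvl_0_0 (used as the target-side interface)
    ls /sys/bus/pci/devices/0000:84:00.1/net     # -> cvl_0_1 (used as the initiator-side interface)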
00:38:00.438 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:00.438 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:00.438 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:00.439 15:33:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:00.439 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:00.439 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:00.439 15:33:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:00.439 Found net devices under 0000:84:00.0: cvl_0_0 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:00.439 Found net devices under 0000:84:00.1: cvl_0_1 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:00.439 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:00.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:00.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:38:00.701 00:38:00.701 --- 10.0.0.2 ping statistics --- 00:38:00.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.701 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:00.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:00.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:38:00.701 00:38:00.701 --- 10.0.0.1 ping statistics --- 00:38:00.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.701 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:00.701 only one NIC for nvmf test 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.701 rmmod nvme_tcp 00:38:00.701 rmmod nvme_fabrics 00:38:00.701 rmmod nvme_keyring 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:00.701 15:33:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.701 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:00.702 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.702 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.702 15:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:03.245 15:33:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.245 00:38:03.245 real 0m5.253s 00:38:03.245 user 0m1.087s 00:38:03.245 sys 0m2.173s 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:03.245 ************************************ 00:38:03.245 END TEST nvmf_target_multipath 00:38:03.245 ************************************ 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:03.245 ************************************ 00:38:03.245 START TEST nvmf_zcopy 00:38:03.245 ************************************ 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:03.245 * Looking for test storage... 
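Before exiting, the multipath run above walked the full nvmf_tcp_init sequence that the zcopy test repeats below: one of the two detected E810 ports (cvl_0_0) is moved into a private namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, an iptables rule accepts TCP port 4420 traffic arriving on cvl_0_1, and a ping in each direction confirms connectivity. Consolidated, the traced commands are:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> initiator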
00:38:03.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lcov --version 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:03.245 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:38:03.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.246 --rc genhtml_branch_coverage=1 00:38:03.246 --rc genhtml_function_coverage=1 00:38:03.246 --rc genhtml_legend=1 00:38:03.246 --rc geninfo_all_blocks=1 00:38:03.246 --rc geninfo_unexecuted_blocks=1 00:38:03.246 00:38:03.246 ' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:38:03.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.246 --rc genhtml_branch_coverage=1 00:38:03.246 --rc genhtml_function_coverage=1 00:38:03.246 --rc genhtml_legend=1 00:38:03.246 --rc geninfo_all_blocks=1 00:38:03.246 --rc geninfo_unexecuted_blocks=1 00:38:03.246 00:38:03.246 ' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:38:03.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.246 --rc genhtml_branch_coverage=1 00:38:03.246 --rc genhtml_function_coverage=1 00:38:03.246 --rc genhtml_legend=1 00:38:03.246 --rc geninfo_all_blocks=1 00:38:03.246 --rc geninfo_unexecuted_blocks=1 00:38:03.246 00:38:03.246 ' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:38:03.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.246 --rc genhtml_branch_coverage=1 00:38:03.246 --rc genhtml_function_coverage=1 00:38:03.246 --rc genhtml_legend=1 00:38:03.246 --rc geninfo_all_blocks=1 00:38:03.246 --rc geninfo_unexecuted_blocks=1 00:38:03.246 00:38:03.246 ' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.246 15:33:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.246 15:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:06.538 15:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:06.538 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:06.538 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:06.538 Found net devices under 0000:84:00.0: cvl_0_0 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:06.538 Found net devices under 0000:84:00.1: cvl_0_1 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:06.538 15:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:06.538 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:06.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:06.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:38:06.539 00:38:06.539 --- 10.0.0.2 ping statistics --- 00:38:06.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:06.539 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:06.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:06.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:38:06.539 00:38:06.539 --- 10.0.0.1 ping statistics --- 00:38:06.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:06.539 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3360449 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3360449 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@831 -- # '[' -z 3360449 ']' 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.539 15:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:06.539 [2024-10-28 15:33:53.057850] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:06.539 [2024-10-28 15:33:53.060378] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:38:06.539 [2024-10-28 15:33:53.060495] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:06.539 [2024-10-28 15:33:53.240902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.539 [2024-10-28 15:33:53.359289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:06.539 [2024-10-28 15:33:53.359401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:06.539 [2024-10-28 15:33:53.359437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:06.539 [2024-10-28 15:33:53.359467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:06.539 [2024-10-28 15:33:53.359493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:06.539 [2024-10-28 15:33:53.360869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.797 [2024-10-28 15:33:53.537009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:06.797 [2024-10-28 15:33:53.537596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
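At this point nvmfappstart has launched the target inside the namespace, pinned to core 1 (mask 0x2) with all tracepoint groups and interrupt mode enabled, and waitforlisten blocks until the RPC socket answers. A rough reconstruction of that step (the poll loop is only an illustration of what waiting for the socket amounts to, not the harness's actual waitforlisten implementation):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # illustrative readiness check: retry a trivial RPC until the socket accepts it
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done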
00:38:06.797 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:06.797 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:38:06.797 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:06.797 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:06.797 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:07.057 [2024-10-28 15:33:53.686641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:07.057 [2024-10-28 15:33:53.714457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:07.057 15:33:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:07.057 malloc0 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:07.057 { 00:38:07.057 "params": { 00:38:07.057 "name": "Nvme$subsystem", 00:38:07.057 "trtype": "$TEST_TRANSPORT", 00:38:07.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:07.057 "adrfam": "ipv4", 00:38:07.057 "trsvcid": "$NVMF_PORT", 00:38:07.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:07.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:07.057 "hdgst": ${hdgst:-false}, 00:38:07.057 "ddgst": ${ddgst:-false} 00:38:07.057 }, 00:38:07.057 "method": "bdev_nvme_attach_controller" 00:38:07.057 } 00:38:07.057 EOF 00:38:07.057 )") 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:07.057 15:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:07.057 "params": { 00:38:07.057 "name": "Nvme1", 00:38:07.057 "trtype": "tcp", 00:38:07.057 "traddr": "10.0.0.2", 00:38:07.057 "adrfam": "ipv4", 00:38:07.057 "trsvcid": "4420", 00:38:07.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:07.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:07.057 "hdgst": false, 00:38:07.057 "ddgst": false 00:38:07.057 }, 00:38:07.057 "method": "bdev_nvme_attach_controller" 00:38:07.057 }' 00:38:07.057 [2024-10-28 15:33:53.876457] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
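The `rpc_cmd` calls traced above (target/zcopy.sh lines 22-30) build the target side of the test: a TCP transport created with `--zcopy`, a subsystem with data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev attached as namespace 1. A minimal sketch replaying the same sequence as plain `scripts/rpc.py` calls, assuming the target from the previous step is already listening on `/var/tmp/spdk.sock`; all flags are copied verbatim from the trace.

```bash
#!/usr/bin/env bash
# Sketch of the target-side setup traced above, replayed with rpc.py
# instead of the harness's rpc_cmd wrapper.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1

# TCP transport; --zcopy is what this zcopy test exercises (flags as in the trace).
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host, serial SPDK00000000000001, up to 10 namespaces.
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10

# Data and discovery listeners on the target-side IP from the log.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MB malloc bdev with 4096-byte blocks, exposed as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
```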
00:38:07.057 [2024-10-28 15:33:53.876642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360480 ] 00:38:07.317 [2024-10-28 15:33:54.026524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.317 [2024-10-28 15:33:54.124373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.576 Running I/O for 10 seconds... 00:38:09.887 2600.00 IOPS, 20.31 MiB/s [2024-10-28T14:33:57.690Z] 2546.50 IOPS, 19.89 MiB/s [2024-10-28T14:33:58.627Z] 3014.67 IOPS, 23.55 MiB/s [2024-10-28T14:33:59.562Z] 3184.25 IOPS, 24.88 MiB/s [2024-10-28T14:34:00.496Z] 3124.80 IOPS, 24.41 MiB/s [2024-10-28T14:34:01.877Z] 3299.50 IOPS, 25.78 MiB/s [2024-10-28T14:34:02.448Z] 3201.14 IOPS, 25.01 MiB/s [2024-10-28T14:34:03.832Z] 3111.25 IOPS, 24.31 MiB/s [2024-10-28T14:34:04.774Z] 3025.89 IOPS, 23.64 MiB/s [2024-10-28T14:34:04.774Z] 2960.00 IOPS, 23.12 MiB/s 00:38:17.907 Latency(us) 00:38:17.907 [2024-10-28T14:34:04.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.907 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:17.907 Verification LBA range: start 0x0 length 0x1000 00:38:17.907 Nvme1n1 : 10.09 2946.58 23.02 0.00 0.00 43123.69 7136.14 57477.50 00:38:17.907 [2024-10-28T14:34:04.774Z] =================================================================================================================== 00:38:17.907 [2024-10-28T14:34:04.774Z] Total : 2946.58 23.02 0.00 0.00 43123.69 7136.14 57477.50 00:38:17.907 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3361779 00:38:17.907 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:17.907 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:17.907 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:17.907 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:17.907 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:17.907 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:18.165 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:18.165 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:18.165 { 00:38:18.165 "params": { 00:38:18.165 "name": "Nvme$subsystem", 00:38:18.165 "trtype": "$TEST_TRANSPORT", 00:38:18.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:18.165 "adrfam": "ipv4", 00:38:18.165 "trsvcid": "$NVMF_PORT", 00:38:18.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:18.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:18.165 "hdgst": ${hdgst:-false}, 00:38:18.165 "ddgst": ${ddgst:-false} 00:38:18.165 }, 00:38:18.165 "method": "bdev_nvme_attach_controller" 00:38:18.165 } 00:38:18.165 EOF 00:38:18.165 )") 00:38:18.165 [2024-10-28 15:34:04.773824] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:38:18.165 [2024-10-28 15:34:04.773871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:18.165 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:18.165 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:18.165 15:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:18.165 "params": { 00:38:18.165 "name": "Nvme1", 00:38:18.165 "trtype": "tcp", 00:38:18.165 "traddr": "10.0.0.2", 00:38:18.165 "adrfam": "ipv4", 00:38:18.165 "trsvcid": "4420", 00:38:18.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:18.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:18.165 "hdgst": false, 00:38:18.165 "ddgst": false 00:38:18.165 }, 00:38:18.165 "method": "bdev_nvme_attach_controller" 00:38:18.165 }' 00:38:18.165 [2024-10-28 15:34:04.781735] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.781759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.789735] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.789757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.797734] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.797756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.805745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.805766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.813743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.813764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.821727] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.821748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.822458] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
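Note that bdevperf does not talk to the target's RPC socket: in both runs above it is handed a JSON config on `/dev/fd/62`/`/dev/fd/63` (a process substitution of `gen_nvmf_target_json`) telling it to attach an NVMe-oF controller over TCP. The printed `params` block is the part that matters. A sketch of an equivalent standalone config plus the first invocation follows; only the inner `params` come from the trace, while the outer "subsystems"/"bdev" wrapper is assumed to follow the generic SPDK JSON-config shape (the harness assembles it with jq, which is not reproduced here).

```bash
#!/usr/bin/env bash
# Sketch: run bdevperf against the target configured above, handing it a
# JSON config through process substitution, as the harness does.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

config='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'

# First bdevperf run from the log: 10 s verify workload, queue depth 128,
# 8 KiB I/O. The second run in the trace swaps in "-t 5 -w randrw -M 50".
"$SPDK_DIR/build/examples/bdevperf" --json <(printf '%s\n' "$config") \
    -t 10 -q 128 -w verify -o 8192
```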
00:38:18.165 [2024-10-28 15:34:04.822546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3361779 ] 00:38:18.165 [2024-10-28 15:34:04.829743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.829764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.837729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.837750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.845726] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.845747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.853727] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.853747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.861743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.861763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.869730] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.869750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.877729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.877750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.885730] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.885750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.165 [2024-10-28 15:34:04.893729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.165 [2024-10-28 15:34:04.893749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.899310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.166 [2024-10-28 15:34:04.901748] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.901770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.909784] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.909834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.917775] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.917812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.925730] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.925751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:38:18.166 [2024-10-28 15:34:04.933728] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.933748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.941745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.941766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.949741] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.949761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.957744] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.957764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.962537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.166 [2024-10-28 15:34:04.965745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.965766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.973729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.973750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.981785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.981824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.989794] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.989833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:04.997791] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:04.997832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:05.005778] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:05.005819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:05.013781] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:05.013823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:05.021773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:05.021812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.166 [2024-10-28 15:34:05.029782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.166 [2024-10-28 15:34:05.029818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.037764] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.037792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 
15:34:05.045797] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.045839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.053795] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.053833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.061759] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.061792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.069731] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.069752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.077742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.077762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.085752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.085778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.093735] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.093759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.101747] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.101770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.109738] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.109764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.117732] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.117754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.125727] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.125747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.133729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.133749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.141743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.141764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.149752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.149775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.157748] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.157771] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.165744] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.165767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.173731] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.173752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.181733] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.181754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.189729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.189750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.197742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.197763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.205734] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.205758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.213729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.213750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.221731] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.221752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.229729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.229749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.237729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.237750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.245732] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.245754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.253731] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.253754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.261729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.261751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.269742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.269763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.425 [2024-10-28 15:34:05.277729] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.425 [2024-10-28 15:34:05.277749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.426 [2024-10-28 15:34:05.285734] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.426 [2024-10-28 15:34:05.285755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.293750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.293774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.301741] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.301768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.309749] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.309773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 Running I/O for 5 seconds... 00:38:18.684 [2024-10-28 15:34:05.325420] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.325446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.336138] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.336163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.349578] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.349603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.358990] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.359030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.370848] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.370875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.381619] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.381681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.392410] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.392435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.405371] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.405397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.415259] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.415284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.427217] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 
[2024-10-28 15:34:05.427243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.438447] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.438473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.449550] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.449575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.462762] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.462788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.472187] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.472212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.484379] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.484410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.499912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.499953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.510466] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.510491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.521311] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.521336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.533083] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.533107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.684 [2024-10-28 15:34:05.546406] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.684 [2024-10-28 15:34:05.546432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.556191] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.556216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.568556] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.568581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.581489] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.581514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.590995] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.591020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.603902] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.603952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.615062] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.615086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.626850] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.626876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.637784] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.637810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.648626] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.648675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.662766] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.662794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.672221] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.672246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.684286] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.684311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.695539] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.695564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.706771] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.706797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.717614] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.717665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.728686] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.943 [2024-10-28 15:34:05.728728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.943 [2024-10-28 15:34:05.742087] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.944 [2024-10-28 15:34:05.742112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.944 [2024-10-28 15:34:05.751401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.944 [2024-10-28 15:34:05.751427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.944 [2024-10-28 15:34:05.764215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.944 [2024-10-28 15:34:05.764241] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.944 [2024-10-28 15:34:05.775415] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.944 [2024-10-28 15:34:05.775440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.944 [2024-10-28 15:34:05.786673] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.944 [2024-10-28 15:34:05.786711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:18.944 [2024-10-28 15:34:05.797877] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:18.944 [2024-10-28 15:34:05.797904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.812317] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.812345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.833253] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.833339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.859136] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.859205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.887426] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.887494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.915763] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.915832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.942889] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.942955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.969901] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.969968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:05.997534] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:05.997604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:06.025594] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:06.025679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.204 [2024-10-28 15:34:06.051957] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.204 [2024-10-28 15:34:06.052024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.075514] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.075582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.103576] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.103644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.132166] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.132233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.158838] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.158903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.185716] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.185783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.212639] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.212722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.238913] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.238979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.267971] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.268037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 [2024-10-28 15:34:06.296620] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.296705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.464 8010.00 IOPS, 62.58 MiB/s [2024-10-28T14:34:06.331Z] [2024-10-28 15:34:06.323744] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.464 [2024-10-28 15:34:06.323811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.345912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.345980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.373881] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.373949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.402269] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.402337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.426117] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.426184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.452990] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.453057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.480774] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:19.724 [2024-10-28 15:34:06.480842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.507937] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.508004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.536412] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.536479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.724 [2024-10-28 15:34:06.563940] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.724 [2024-10-28 15:34:06.564006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.983 [2024-10-28 15:34:06.591952] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.592020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.612953] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.613020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.640138] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.640205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.667813] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.667883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.694648] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.694734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.722446] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.722513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.746928] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.746994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.772077] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.772145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.802059] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.802126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.823707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.823738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:19.984 [2024-10-28 15:34:06.847451] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:19.984 [2024-10-28 15:34:06.847518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.864529] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.864561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.876122] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.876152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.887598] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.887629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.901115] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.901146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.913455] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.913486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.925459] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.925492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.937270] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.937296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.951016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.951041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.960496] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.960521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.972587] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.972622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.987263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.987288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:06.996808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:06.996835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.009145] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.009171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.021956] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.021982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.031452] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.031477] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.043349] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.043374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.054687] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.054729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.065793] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.065818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.076909] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.076936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.090330] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.090355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.242 [2024-10-28 15:34:07.100391] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.242 [2024-10-28 15:34:07.100416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.113398] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.113423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.124436] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.124460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.139745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.139771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.149569] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.149594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.161659] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.161685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.173125] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.173150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.188447] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.188471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.198800] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.198826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.210974] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.211013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.222250] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.222282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.232689] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.232715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.244353] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.244379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.255243] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.255273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.266442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.266475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.277628] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.277680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.288944] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.288970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.303455] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.303481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.314247] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.314272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 7901.00 IOPS, 61.73 MiB/s [2024-10-28T14:34:07.367Z] [2024-10-28 15:34:07.324416] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.324442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.336530] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.336554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.349826] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.349852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.500 [2024-10-28 15:34:07.359082] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.500 [2024-10-28 15:34:07.359106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.372065] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:20.758 [2024-10-28 15:34:07.372089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.383264] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.383288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.394319] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.394344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.405127] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.405152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.416927] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.416968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.430255] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.430280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.440687] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.440713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.452542] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.452566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.467698] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.467723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.477899] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.477925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.758 [2024-10-28 15:34:07.490285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.758 [2024-10-28 15:34:07.490315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.500822] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.500848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.512709] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.512744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.526994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.527018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.537116] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.537141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.549348] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.549373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.562890] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.562917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.573509] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.573533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.585755] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.585781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.596945] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.596971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.608241] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.608266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:20.759 [2024-10-28 15:34:07.619412] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:20.759 [2024-10-28 15:34:07.619436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.631756] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.631783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.643184] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.643209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.654214] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.654238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.665571] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.665596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.676619] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.676677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.692039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.692065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.708275] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.708300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.718724] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.718750] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.730550] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.730575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.741360] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.741394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.752942] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.752968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.767045] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.767070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.777158] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.777183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.789004] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.789029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.801493] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.801518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.811332] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.811356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.823810] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.823837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.835196] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.835221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.846255] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.846279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.857348] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.857373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.019 [2024-10-28 15:34:07.870719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.019 [2024-10-28 15:34:07.870786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:07.897278] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:07.897345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:07.924095] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:07.924163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:07.951036] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:07.951103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:07.978028] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:07.978094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:08.006062] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:08.006129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:08.033768] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:08.033836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:08.061965] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:08.062034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:08.090546] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:08.090634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:08.112870] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:08.112939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.279 [2024-10-28 15:34:08.140868] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.279 [2024-10-28 15:34:08.140947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.165278] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.165309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.185133] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.185202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.207769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.207800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.233330] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.233398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.262026] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.262094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.289345] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.289411] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.312167] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.312237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 8056.00 IOPS, 62.94 MiB/s [2024-10-28T14:34:08.407Z] [2024-10-28 15:34:08.334630] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.334715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.361994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.362025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.540 [2024-10-28 15:34:08.386711] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.540 [2024-10-28 15:34:08.386781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.411900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.411968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.437865] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.437933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.465026] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.465093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.493567] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.493635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.521603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.521690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.549236] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.549303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.576423] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.576492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.604073] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.604140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.631928] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.631995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:21.801 [2024-10-28 15:34:08.660022] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:21.801 [2024-10-28 15:34:08.660090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 
15:34:08.688521] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.688589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.714969] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.715038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.742377] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.742444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.764542] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.764608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.792023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.792090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.819195] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.819263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.846498] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.846565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.871095] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.871162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.897343] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.897411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.061 [2024-10-28 15:34:08.923645] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.061 [2024-10-28 15:34:08.923731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.321 [2024-10-28 15:34:08.950711] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:08.950778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:08.973622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:08.973709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:09.002096] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:09.002163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:09.029241] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:09.029309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:09.056977] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:09.057046] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:09.084491] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:09.084559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:09.112288] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:09.112354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:09.139230] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:09.139297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.322 [2024-10-28 15:34:09.164594] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.322 [2024-10-28 15:34:09.164680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.192262] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.192329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.220852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.220894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.247628] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.247727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.274821] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.274888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.299832] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.299901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.325934] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.326004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 7216.75 IOPS, 56.38 MiB/s [2024-10-28T14:34:09.449Z] [2024-10-28 15:34:09.348130] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.348197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.371786] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.371854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.400249] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.400318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.582 [2024-10-28 15:34:09.427852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.582 [2024-10-28 15:34:09.427919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 
15:34:09.454510] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.454587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.483089] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.483158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.505450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.505517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.533543] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.533611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.560487] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.560570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.587824] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.587891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.613900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.613970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.641781] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.641848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.669102] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.669169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:22.842 [2024-10-28 15:34:09.697299] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:22.842 [2024-10-28 15:34:09.697367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.723807] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.723877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.747392] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.747458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.772008] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.772074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.799615] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.799701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.828107] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.828174] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.854976] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.855043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.882067] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.882133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.911380] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.911448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.936698] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.936765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.102 [2024-10-28 15:34:09.965771] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.102 [2024-10-28 15:34:09.965838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:09.993520] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:09.993587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.010605] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.010639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.028361] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.028396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.040002] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.040043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.060125] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.060195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.085840] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.085870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.109372] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.109440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.136733] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.136798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.162552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.162582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.186329] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.186359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.362 [2024-10-28 15:34:10.207862] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.362 [2024-10-28 15:34:10.207929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.233357] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.233424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.257775] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.257811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.281308] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.281375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.306346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.306415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.327035] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.327102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 6791.80 IOPS, 53.06 MiB/s [2024-10-28T14:34:10.488Z] [2024-10-28 15:34:10.349455] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.349522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 00:38:23.621 Latency(us) 00:38:23.621 [2024-10-28T14:34:10.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.621 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:38:23.621 Nvme1n1 : 5.02 6785.73 53.01 0.00 0.00 18817.08 3155.44 43884.85 00:38:23.621 [2024-10-28T14:34:10.488Z] =================================================================================================================== 00:38:23.621 [2024-10-28T14:34:10.488Z] Total : 6785.73 53.01 0.00 0.00 18817.08 3155.44 43884.85 00:38:23.621 [2024-10-28 15:34:10.358100] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.358162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.365860] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.365911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.377790] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.377817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.385751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.385779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.393812] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.393864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.401804] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.401851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.409806] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.409854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.417817] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.417869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.425810] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.425863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.433828] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.433881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.441811] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.441866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.449819] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.449872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.457823] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.457878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.465829] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.465883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.473819] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.473873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.621 [2024-10-28 15:34:10.481821] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.621 [2024-10-28 15:34:10.481872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.489818] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.489865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.497807] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.497857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.505811] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.505863] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.517886] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.517950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.529885] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.529948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.541876] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.541937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.549897] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.549965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.561760] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.561787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.569794] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.569842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.577815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.577869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.585928] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.586011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.597897] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.597958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.609879] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.609938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-10-28 15:34:10.621823] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-10-28 15:34:10.621850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-10-28 15:34:10.633896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-10-28 15:34:10.633956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3361779) - No such process 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3361779 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.883 15:34:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:23.883 delay0 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.883 15:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:24.143 [2024-10-28 15:34:10.819889] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:32.281 Initializing NVMe Controllers 00:38:32.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:32.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:32.281 Initialization complete. Launching workers. 
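The run above re-points the subsystem's namespace at a delay bdev and then drives it with SPDK's bundled abort example over TCP, which is what produces the NS/CTRLR completion counts that follow. A minimal sketch of the same sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and using the standard scripts/rpc.py client in place of the harness's rpc_cmd helper (paths taken from this run):

  # Sketch only: mirrors the zcopy.sh steps visible in the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the existing namespace, wrap malloc0 in a delay bdev, and re-add it as NSID 1.
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_ns "$NQN" 1
  "$SPDK_DIR/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" delay0 -n 1
  # Run the abort example for 5 seconds at queue depth 64 against the slowed namespace.
  "$SPDK_DIR/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The large delay values keep submitted I/O in flight long enough for the abort path to be exercised, which helps explain why most submissions in the summary below are aborted successfully rather than completing.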
00:38:32.281 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 245, failed: 8528 00:38:32.281 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 8651, failed to submit 122 00:38:32.281 success 8552, unsuccessful 99, failed 0 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:32.281 rmmod nvme_tcp 00:38:32.281 rmmod nvme_fabrics 00:38:32.281 rmmod nvme_keyring 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3360449 ']' 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3360449 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3360449 ']' 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3360449 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:32.281 15:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3360449 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360449' 00:38:32.281 killing process with pid 3360449 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3360449 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3360449 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:32.281 15:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:32.281 15:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.666 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:33.666 00:38:33.666 real 0m30.806s 00:38:33.666 user 0m41.530s 00:38:33.666 sys 0m12.212s 00:38:33.666 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:33.666 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:33.666 ************************************ 00:38:33.666 END TEST nvmf_zcopy 00:38:33.666 ************************************ 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:33.926 ************************************ 00:38:33.926 START TEST nvmf_nmic 00:38:33.926 ************************************ 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:33.926 * Looking for test storage... 
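The nmic target test starting here is launched through the harness's run_test wrapper with the same transport and interrupt-mode options used for the rest of this job. A minimal sketch of invoking that script directly, assuming the checkout path from this run, root privileges, and a machine already prepared the way the harness expects (hugepages configured, e810 NICs present):

  # Sketch only: standalone invocation of the test the harness runs as nvmf_nmic.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo "$SPDK_DIR/test/nvmf/target/nmic.sh" --transport=tcp --interrupt-mode

As the surrounding trace shows, the script sources test/nvmf/common.sh, which supplies the port and address defaults and performs the PCI NIC detection seen further on.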
00:38:33.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # lcov --version 00:38:33.926 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:38:34.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.187 --rc genhtml_branch_coverage=1 00:38:34.187 --rc genhtml_function_coverage=1 00:38:34.187 --rc genhtml_legend=1 00:38:34.187 --rc geninfo_all_blocks=1 00:38:34.187 --rc geninfo_unexecuted_blocks=1 00:38:34.187 00:38:34.187 ' 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:38:34.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.187 --rc genhtml_branch_coverage=1 00:38:34.187 --rc genhtml_function_coverage=1 00:38:34.187 --rc genhtml_legend=1 00:38:34.187 --rc geninfo_all_blocks=1 00:38:34.187 --rc geninfo_unexecuted_blocks=1 00:38:34.187 00:38:34.187 ' 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:38:34.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.187 --rc genhtml_branch_coverage=1 00:38:34.187 --rc genhtml_function_coverage=1 00:38:34.187 --rc genhtml_legend=1 00:38:34.187 --rc geninfo_all_blocks=1 00:38:34.187 --rc geninfo_unexecuted_blocks=1 00:38:34.187 00:38:34.187 ' 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:38:34.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.187 --rc genhtml_branch_coverage=1 00:38:34.187 --rc genhtml_function_coverage=1 00:38:34.187 --rc genhtml_legend=1 00:38:34.187 --rc geninfo_all_blocks=1 00:38:34.187 --rc geninfo_unexecuted_blocks=1 00:38:34.187 00:38:34.187 ' 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:34.187 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:34.188 15:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:34.188 15:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:37.531 15:34:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:37.531 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:37.531 15:34:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:37.531 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:37.531 Found net devices under 0000:84:00.0: cvl_0_0 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:37.531 
15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:37.531 Found net devices under 0000:84:00.1: cvl_0_1 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:37.531 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:37.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:37.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:38:37.532 00:38:37.532 --- 10.0.0.2 ping statistics --- 00:38:37.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.532 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:37.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:37.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:38:37.532 00:38:37.532 --- 10.0.0.1 ping statistics --- 00:38:37.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.532 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3365291 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3365291 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3365291 ']' 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:37.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:37.532 15:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:37.532 [2024-10-28 15:34:23.949612] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:37.532 [2024-10-28 15:34:23.952334] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:38:37.532 [2024-10-28 15:34:23.952458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:37.532 [2024-10-28 15:34:24.141042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:37.532 [2024-10-28 15:34:24.264371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:37.532 [2024-10-28 15:34:24.264481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:37.532 [2024-10-28 15:34:24.264518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:37.532 [2024-10-28 15:34:24.264548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:37.532 [2024-10-28 15:34:24.264575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:37.532 [2024-10-28 15:34:24.268099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.532 [2024-10-28 15:34:24.268202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:37.532 [2024-10-28 15:34:24.268289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:37.532 [2024-10-28 15:34:24.268293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.792 [2024-10-28 15:34:24.441296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:37.792 [2024-10-28 15:34:24.441790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:37.792 [2024-10-28 15:34:24.442024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
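The trace above is the usual nvmf_tcp_init/nvmfappstart sequence for a phy run: the target-side E810 port is moved into its own network namespace, both ports get 10.0.0.x addresses, the firewall is opened for port 4420, connectivity is ping-tested in both directions, and nvmf_tgt is launched inside the namespace in interrupt mode. A condensed sketch of that topology follows, using the interface names, addresses and flags printed in the trace; the relative build path and the backgrounding of nvmf_tgt are illustrative assumptions, not the exact common.sh code.

# Sketch of the TCP test topology the trace above sets up (assumptions noted inline).
TGT_IF=cvl_0_0              # target-side port, moved into its own namespace
INI_IF=cvl_0_1              # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic to the default port reach the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the target inside the namespace with the flags shown in the trace;
# the relative path to nvmf_tgt is an assumption for illustration.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &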
00:38:37.792 [2024-10-28 15:34:24.442701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:37.792 [2024-10-28 15:34:24.442967] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 [2024-10-28 15:34:25.429524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 Malloc0 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:38.760 
15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 [2024-10-28 15:34:25.525463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:38.760 test case1: single bdev can't be used in multiple subsystems 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 [2024-10-28 15:34:25.549187] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:38:38.760 [2024-10-28 15:34:25.549220] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:38.760 [2024-10-28 15:34:25.549237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:38.760 request: 00:38:38.760 { 00:38:38.760 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:38.760 "namespace": { 00:38:38.760 "bdev_name": "Malloc0", 00:38:38.760 "no_auto_visible": false 00:38:38.760 }, 00:38:38.760 "method": "nvmf_subsystem_add_ns", 00:38:38.760 "req_id": 1 00:38:38.760 } 00:38:38.760 Got JSON-RPC error response 00:38:38.760 response: 00:38:38.760 { 00:38:38.760 "code": -32602, 00:38:38.760 "message": "Invalid parameters" 00:38:38.760 } 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:38.760 15:34:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:38.760 Adding namespace failed - expected result. 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:38.760 test case2: host connect to nvmf target in multiple paths 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:38.760 [2024-10-28 15:34:25.557293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.760 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:39.018 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:39.274 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:39.274 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:38:39.274 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:39.274 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:38:39.274 15:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:38:41.167 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:38:41.167 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:38:41.167 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:38:41.167 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:38:41.167 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:38:41.167 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:38:41.167 15:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:41.167 [global] 00:38:41.167 thread=1 00:38:41.167 invalidate=1 
00:38:41.167 rw=write 00:38:41.167 time_based=1 00:38:41.167 runtime=1 00:38:41.167 ioengine=libaio 00:38:41.167 direct=1 00:38:41.167 bs=4096 00:38:41.167 iodepth=1 00:38:41.167 norandommap=0 00:38:41.167 numjobs=1 00:38:41.167 00:38:41.167 verify_dump=1 00:38:41.167 verify_backlog=512 00:38:41.167 verify_state_save=0 00:38:41.167 do_verify=1 00:38:41.167 verify=crc32c-intel 00:38:41.167 [job0] 00:38:41.167 filename=/dev/nvme0n1 00:38:41.167 Could not set queue depth (nvme0n1) 00:38:41.423 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:41.423 fio-3.35 00:38:41.423 Starting 1 thread 00:38:42.792 00:38:42.792 job0: (groupid=0, jobs=1): err= 0: pid=3365811: Mon Oct 28 15:34:29 2024 00:38:42.792 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:38:42.792 slat (nsec): min=8306, max=15460, avg=14550.36, stdev=1432.07 00:38:42.792 clat (usec): min=40930, max=41018, avg=40977.69, stdev=20.33 00:38:42.792 lat (usec): min=40938, max=41033, avg=40992.24, stdev=21.10 00:38:42.792 clat percentiles (usec): 00:38:42.792 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:42.792 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:42.792 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:42.792 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:42.792 | 99.99th=[41157] 00:38:42.792 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:38:42.792 slat (nsec): min=9013, max=32688, avg=11128.41, stdev=2959.53 00:38:42.792 clat (usec): min=141, max=371, avg=188.02, stdev=48.85 00:38:42.792 lat (usec): min=151, max=382, avg=199.15, stdev=49.78 00:38:42.792 clat percentiles (usec): 00:38:42.792 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 145], 20.00th=[ 149], 00:38:42.792 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 178], 00:38:42.792 | 70.00th=[ 198], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 285], 00:38:42.792 | 99.00th=[ 338], 99.50th=[ 343], 99.90th=[ 371], 99.95th=[ 371], 00:38:42.792 | 99.99th=[ 371] 00:38:42.792 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:38:42.792 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:42.792 lat (usec) : 250=84.08%, 500=11.80% 00:38:42.792 lat (msec) : 50=4.12% 00:38:42.792 cpu : usr=0.20%, sys=0.90%, ctx=537, majf=0, minf=1 00:38:42.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.792 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:42.792 00:38:42.792 Run status group 0 (all jobs): 00:38:42.792 READ: bw=87.5KiB/s (89.6kB/s), 87.5KiB/s-87.5KiB/s (89.6kB/s-89.6kB/s), io=88.0KiB (90.1kB), run=1006-1006msec 00:38:42.792 WRITE: bw=2036KiB/s (2085kB/s), 2036KiB/s-2036KiB/s (2085kB/s-2085kB/s), io=2048KiB (2097kB), run=1006-1006msec 00:38:42.792 00:38:42.792 Disk stats (read/write): 00:38:42.792 nvme0n1: ios=47/512, merge=0/0, ticks=1763/97, in_queue=1860, util=98.50% 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:42.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:38:42.792 15:34:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:42.792 rmmod nvme_tcp 00:38:42.792 rmmod nvme_fabrics 00:38:42.792 rmmod nvme_keyring 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3365291 ']' 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3365291 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3365291 ']' 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3365291 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3365291 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3365291' 00:38:42.792 killing process with pid 3365291 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3365291 00:38:42.792 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3365291 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.359 15:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.261 15:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.261 00:38:45.261 real 0m11.383s 00:38:45.261 user 0m18.178s 00:38:45.261 sys 0m4.270s 00:38:45.261 15:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:45.261 15:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:45.261 ************************************ 00:38:45.261 END TEST nvmf_nmic 00:38:45.261 ************************************ 00:38:45.261 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:45.261 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:45.262 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:45.262 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:45.262 ************************************ 00:38:45.262 START TEST nvmf_fio_target 00:38:45.262 ************************************ 00:38:45.262 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:45.520 * Looking for test storage... 
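Stripped of the xtrace noise, the nvmf_nmic run above reduces to a short RPC and nvme-cli sequence: one malloc bdev backing a namespace in cnode1, a deliberately failing attempt to attach the same bdev to cnode2 (test case 1), and a second listener on cnode1 so the host can connect over two paths (test case 2). A sketch of that flow is below, with every RPC name and argument taken from the rpc_cmd calls in the trace; the rpc.py path and the $HOSTNQN placeholder are illustrative assumptions.

RPC=./scripts/rpc.py   # assumes the default /var/tmp/spdk.sock RPC socket

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0

# cnode1 owns the bdev and listens on the default NVMe/TCP port.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Test case 1: a bdev already claimed by one subsystem cannot back a second
# namespace; the add_ns below fails with the "already claimed" error in the log.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'

# Test case 2: a second listener on cnode1 gives the host two paths to one subsystem.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn="$HOSTNQN"

# The fio write job then runs against the resulting /dev/nvme0n1 before teardown:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1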
00:38:45.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lcov --version 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:45.520 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:38:45.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.521 --rc genhtml_branch_coverage=1 00:38:45.521 --rc genhtml_function_coverage=1 00:38:45.521 --rc genhtml_legend=1 00:38:45.521 --rc geninfo_all_blocks=1 00:38:45.521 --rc geninfo_unexecuted_blocks=1 00:38:45.521 00:38:45.521 ' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:38:45.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.521 --rc genhtml_branch_coverage=1 00:38:45.521 --rc genhtml_function_coverage=1 00:38:45.521 --rc genhtml_legend=1 00:38:45.521 --rc geninfo_all_blocks=1 00:38:45.521 --rc geninfo_unexecuted_blocks=1 00:38:45.521 00:38:45.521 ' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:38:45.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.521 --rc genhtml_branch_coverage=1 00:38:45.521 --rc genhtml_function_coverage=1 00:38:45.521 --rc genhtml_legend=1 00:38:45.521 --rc geninfo_all_blocks=1 00:38:45.521 --rc geninfo_unexecuted_blocks=1 00:38:45.521 00:38:45.521 ' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:38:45.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.521 --rc genhtml_branch_coverage=1 00:38:45.521 --rc genhtml_function_coverage=1 00:38:45.521 --rc genhtml_legend=1 00:38:45.521 --rc geninfo_all_blocks=1 00:38:45.521 --rc geninfo_unexecuted_blocks=1 00:38:45.521 
00:38:45.521 ' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:45.521 15:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:48.808 15:34:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:48.808 15:34:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:48.808 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:48.808 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:48.808 Found net 
devices under 0000:84:00.0: cvl_0_0 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:48.808 Found net devices under 0000:84:00.1: cvl_0_1 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.808 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:38:48.809 00:38:48.809 --- 10.0.0.2 ping statistics --- 00:38:48.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.809 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:48.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:38:48.809 00:38:48.809 --- 10.0.0.1 ping statistics --- 00:38:48.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.809 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3368036 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3368036 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3368036 ']' 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
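For readability, the namespace setup that nvmftestinit traces above can be condensed into a short shell sketch. This is only a restatement of the ip/iptables/ping calls already visible in the trace, not part of the test scripts themselves; the interface names (cvl_0_0, cvl_0_1), the namespace name (cvl_0_0_ns_spdk) and the 10.0.0.1/10.0.0.2 addresses are simply the values used in this run.

#!/usr/bin/env bash
# Sketch of the TCP test topology built above: one E810 port stays in the
# default namespace as the initiator, the other is moved into a private
# namespace and carries the SPDK target's listener address.
set -e
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# Allow NVMe/TCP traffic to port 4420 on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity checks in both directions, as done by the trace
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1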
00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:48.809 15:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:48.809 [2024-10-28 15:34:35.467411] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:48.809 [2024-10-28 15:34:35.470156] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:38:48.809 [2024-10-28 15:34:35.470273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.809 [2024-10-28 15:34:35.649670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:49.068 [2024-10-28 15:34:35.770455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.068 [2024-10-28 15:34:35.770558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:49.068 [2024-10-28 15:34:35.770596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.068 [2024-10-28 15:34:35.770641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.068 [2024-10-28 15:34:35.770688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:49.068 [2024-10-28 15:34:35.774146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.068 [2024-10-28 15:34:35.774251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:49.068 [2024-10-28 15:34:35.774338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:49.068 [2024-10-28 15:34:35.774342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.327 [2024-10-28 15:34:35.939951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:49.327 [2024-10-28 15:34:35.940507] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:49.327 [2024-10-28 15:34:35.940781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:49.327 [2024-10-28 15:34:35.941784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:49.327 [2024-10-28 15:34:35.942226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
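With nvmf_tgt now running inside the namespace, target/fio.sh builds its test configuration over the RPC socket and connects the initiator. The trace that follows is verbose; condensed, it is equivalent to the sketch below, where rpc_py stands for the spdk/scripts/rpc.py path used in the trace and the NQN, serial and host identifiers are the values this run uses.

# Condensed from the fio.sh trace below (target/fio.sh lines 19-48).
rpc_py=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc_py nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB / 512 B-block malloc bdevs: two exported directly,
# two combined into a RAID0 volume, three into a concat volume.
$rpc_py bdev_malloc_create 64 512        # Malloc0
$rpc_py bdev_malloc_create 64 512        # Malloc1
$rpc_py bdev_malloc_create 64 512        # Malloc2
$rpc_py bdev_malloc_create 64 512        # Malloc3
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc_py bdev_malloc_create 64 512        # Malloc4
$rpc_py bdev_malloc_create 64 512        # Malloc5
$rpc_py bdev_malloc_create 64 512        # Malloc6
$rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# Subsystem with four namespaces and a TCP listener on the namespaced port
$rpc_py nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns $NQN Malloc0
$rpc_py nvmf_subsystem_add_ns $NQN Malloc1
$rpc_py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_ns $NQN raid0
$rpc_py nvmf_subsystem_add_ns $NQN concat0

# Initiator side: connect over TCP, then wait for the four namespaces
nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid=cd6acfbe-4794-e311-a299-001e67a97b02
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 4 (nvme0n1..n4)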
00:38:49.327 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:49.327 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:38:49.327 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:49.327 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:49.327 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:49.327 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:49.327 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:49.587 [2024-10-28 15:34:36.387271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:49.587 15:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:50.527 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:38:50.527 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:51.096 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:38:51.096 15:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:51.666 15:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:38:51.666 15:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:52.607 15:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:38:52.607 15:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:38:53.176 15:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.746 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:53.746 15:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:54.683 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:54.683 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:55.253 15:34:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:55.253 15:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:55.824 15:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:56.395 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:56.395 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:57.333 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:57.333 15:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:57.592 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:57.849 [2024-10-28 15:34:44.619408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.849 15:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:58.414 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:58.980 15:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:59.237 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:59.237 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:38:59.237 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:59.237 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:38:59.237 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:38:59.237 15:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:01.765 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:01.765 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:39:01.765 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:01.765 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:01.765 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:01.765 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:39:01.765 15:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:01.765 [global] 00:39:01.765 thread=1 00:39:01.765 invalidate=1 00:39:01.765 rw=write 00:39:01.765 time_based=1 00:39:01.765 runtime=1 00:39:01.765 ioengine=libaio 00:39:01.765 direct=1 00:39:01.765 bs=4096 00:39:01.765 iodepth=1 00:39:01.765 norandommap=0 00:39:01.765 numjobs=1 00:39:01.765 00:39:01.765 verify_dump=1 00:39:01.765 verify_backlog=512 00:39:01.765 verify_state_save=0 00:39:01.765 do_verify=1 00:39:01.765 verify=crc32c-intel 00:39:01.765 [job0] 00:39:01.765 filename=/dev/nvme0n1 00:39:01.765 [job1] 00:39:01.765 filename=/dev/nvme0n2 00:39:01.765 [job2] 00:39:01.765 filename=/dev/nvme0n3 00:39:01.765 [job3] 00:39:01.765 filename=/dev/nvme0n4 00:39:01.765 Could not set queue depth (nvme0n1) 00:39:01.765 Could not set queue depth (nvme0n2) 00:39:01.765 Could not set queue depth (nvme0n3) 00:39:01.765 Could not set queue depth (nvme0n4) 00:39:01.765 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:01.765 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:01.765 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:01.765 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:01.765 fio-3.35 00:39:01.765 Starting 4 threads 00:39:03.138 00:39:03.138 job0: (groupid=0, jobs=1): err= 0: pid=3369602: Mon Oct 28 15:34:49 2024 00:39:03.138 read: IOPS=23, BW=93.4KiB/s (95.6kB/s)(96.0KiB/1028msec) 00:39:03.138 slat (nsec): min=9395, max=31765, avg=15272.75, stdev=6026.22 00:39:03.138 clat (usec): min=369, max=41921, avg=37623.58, stdev=11458.90 00:39:03.138 lat (usec): min=379, max=41936, avg=37638.86, stdev=11459.72 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[ 371], 5.00th=[ 486], 10.00th=[40633], 20.00th=[41157], 00:39:03.138 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:03.138 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:03.138 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:03.138 | 99.99th=[41681] 00:39:03.138 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:39:03.138 slat (nsec): min=9876, max=49400, avg=11614.49, stdev=3662.72 00:39:03.138 clat (usec): min=174, max=3160, avg=227.51, stdev=132.70 00:39:03.138 lat (usec): min=184, max=3181, avg=239.12, stdev=133.39 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 202], 00:39:03.138 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:39:03.138 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 249], 00:39:03.138 
| 99.00th=[ 273], 99.50th=[ 343], 99.90th=[ 3163], 99.95th=[ 3163], 00:39:03.138 | 99.99th=[ 3163] 00:39:03.138 bw ( KiB/s): min= 4096, max= 4096, per=20.56%, avg=4096.00, stdev= 0.00, samples=1 00:39:03.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:03.138 lat (usec) : 250=91.79%, 500=3.73%, 750=0.19% 00:39:03.138 lat (msec) : 4=0.19%, 50=4.10% 00:39:03.138 cpu : usr=0.29%, sys=0.58%, ctx=537, majf=0, minf=1 00:39:03.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:03.138 job1: (groupid=0, jobs=1): err= 0: pid=3369609: Mon Oct 28 15:34:49 2024 00:39:03.138 read: IOPS=1594, BW=6377KiB/s (6530kB/s)(6556KiB/1028msec) 00:39:03.138 slat (nsec): min=5245, max=45007, avg=10232.80, stdev=4999.05 00:39:03.138 clat (usec): min=205, max=42120, avg=343.36, stdev=1760.40 00:39:03.138 lat (usec): min=214, max=42135, avg=353.59, stdev=1760.71 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:39:03.138 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:39:03.138 | 70.00th=[ 269], 80.00th=[ 330], 90.00th=[ 379], 95.00th=[ 416], 00:39:03.138 | 99.00th=[ 449], 99.50th=[ 461], 99.90th=[41157], 99.95th=[42206], 00:39:03.138 | 99.99th=[42206] 00:39:03.138 write: IOPS=1992, BW=7969KiB/s (8160kB/s)(8192KiB/1028msec); 0 zone resets 00:39:03.138 slat (nsec): min=6709, max=52323, avg=11176.73, stdev=3620.77 00:39:03.138 clat (usec): min=147, max=976, avg=201.63, stdev=49.93 00:39:03.138 lat (usec): min=157, max=986, avg=212.81, stdev=50.66 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:39:03.138 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 196], 00:39:03.138 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 277], 00:39:03.138 | 99.00th=[ 302], 99.50th=[ 388], 99.90th=[ 668], 99.95th=[ 701], 00:39:03.138 | 99.99th=[ 979] 00:39:03.138 bw ( KiB/s): min= 8192, max= 8192, per=41.12%, avg=8192.00, stdev= 0.00, samples=2 00:39:03.138 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:39:03.138 lat (usec) : 250=76.08%, 500=23.70%, 750=0.11%, 1000=0.03% 00:39:03.138 lat (msec) : 50=0.08% 00:39:03.138 cpu : usr=2.14%, sys=3.99%, ctx=3688, majf=0, minf=1 00:39:03.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 issued rwts: total=1639,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:03.138 job2: (groupid=0, jobs=1): err= 0: pid=3369614: Mon Oct 28 15:34:49 2024 00:39:03.138 read: IOPS=1847, BW=7389KiB/s (7566kB/s)(7396KiB/1001msec) 00:39:03.138 slat (nsec): min=5319, max=71748, avg=9982.73, stdev=4999.58 00:39:03.138 clat (usec): min=227, max=647, avg=285.12, stdev=48.52 00:39:03.138 lat (usec): min=233, max=664, avg=295.11, stdev=50.95 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:39:03.138 | 30.00th=[ 260], 
40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:39:03.138 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 334], 95.00th=[ 388], 00:39:03.138 | 99.00th=[ 519], 99.50th=[ 562], 99.90th=[ 611], 99.95th=[ 652], 00:39:03.138 | 99.99th=[ 652] 00:39:03.138 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:03.138 slat (nsec): min=7120, max=42235, avg=10509.70, stdev=3195.69 00:39:03.138 clat (usec): min=166, max=536, avg=205.96, stdev=28.95 00:39:03.138 lat (usec): min=174, max=546, avg=216.47, stdev=30.35 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:39:03.138 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 204], 00:39:03.138 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 258], 00:39:03.138 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 396], 99.95th=[ 416], 00:39:03.138 | 99.99th=[ 537] 00:39:03.138 bw ( KiB/s): min= 8192, max= 8192, per=41.12%, avg=8192.00, stdev= 0.00, samples=1 00:39:03.138 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:03.138 lat (usec) : 250=52.66%, 500=46.73%, 750=0.62% 00:39:03.138 cpu : usr=1.50%, sys=4.70%, ctx=3898, majf=0, minf=1 00:39:03.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 issued rwts: total=1849,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:03.138 job3: (groupid=0, jobs=1): err= 0: pid=3369615: Mon Oct 28 15:34:49 2024 00:39:03.138 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:39:03.138 slat (nsec): min=7262, max=16185, avg=15392.95, stdev=1830.26 00:39:03.138 clat (usec): min=40383, max=42215, avg=41136.84, stdev=428.05 00:39:03.138 lat (usec): min=40391, max=42231, avg=41152.23, stdev=428.78 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:03.138 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:03.138 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:39:03.138 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:03.138 | 99.99th=[42206] 00:39:03.138 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:39:03.138 slat (nsec): min=7119, max=36373, avg=10525.15, stdev=4945.22 00:39:03.138 clat (usec): min=184, max=1222, avg=225.01, stdev=57.91 00:39:03.138 lat (usec): min=191, max=1241, avg=235.54, stdev=58.87 00:39:03.138 clat percentiles (usec): 00:39:03.138 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:39:03.138 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:39:03.138 | 70.00th=[ 227], 80.00th=[ 253], 90.00th=[ 285], 95.00th=[ 302], 00:39:03.138 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 1221], 99.95th=[ 1221], 00:39:03.138 | 99.99th=[ 1221] 00:39:03.138 bw ( KiB/s): min= 4096, max= 4096, per=20.56%, avg=4096.00, stdev= 0.00, samples=1 00:39:03.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:03.138 lat (usec) : 250=76.22%, 500=19.48% 00:39:03.138 lat (msec) : 2=0.19%, 50=4.12% 00:39:03.138 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:39:03.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.138 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:03.138 00:39:03.138 Run status group 0 (all jobs): 00:39:03.138 READ: bw=13.4MiB/s (14.1MB/s), 85.6KiB/s-7389KiB/s (87.7kB/s-7566kB/s), io=13.8MiB (14.5MB), run=1001-1028msec 00:39:03.138 WRITE: bw=19.5MiB/s (20.4MB/s), 1992KiB/s-8184KiB/s (2040kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1028msec 00:39:03.138 00:39:03.138 Disk stats (read/write): 00:39:03.138 nvme0n1: ios=69/512, merge=0/0, ticks=728/114, in_queue=842, util=86.07% 00:39:03.138 nvme0n2: ios=1586/1852, merge=0/0, ticks=464/370, in_queue=834, util=90.10% 00:39:03.138 nvme0n3: ios=1593/1773, merge=0/0, ticks=1351/368, in_queue=1719, util=92.73% 00:39:03.138 nvme0n4: ios=41/512, merge=0/0, ticks=1605/114, in_queue=1719, util=94.26% 00:39:03.138 15:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:03.138 [global] 00:39:03.138 thread=1 00:39:03.138 invalidate=1 00:39:03.138 rw=randwrite 00:39:03.138 time_based=1 00:39:03.138 runtime=1 00:39:03.138 ioengine=libaio 00:39:03.138 direct=1 00:39:03.138 bs=4096 00:39:03.138 iodepth=1 00:39:03.138 norandommap=0 00:39:03.138 numjobs=1 00:39:03.138 00:39:03.138 verify_dump=1 00:39:03.138 verify_backlog=512 00:39:03.138 verify_state_save=0 00:39:03.138 do_verify=1 00:39:03.138 verify=crc32c-intel 00:39:03.138 [job0] 00:39:03.138 filename=/dev/nvme0n1 00:39:03.138 [job1] 00:39:03.138 filename=/dev/nvme0n2 00:39:03.138 [job2] 00:39:03.138 filename=/dev/nvme0n3 00:39:03.138 [job3] 00:39:03.138 filename=/dev/nvme0n4 00:39:03.138 Could not set queue depth (nvme0n1) 00:39:03.138 Could not set queue depth (nvme0n2) 00:39:03.138 Could not set queue depth (nvme0n3) 00:39:03.138 Could not set queue depth (nvme0n4) 00:39:03.138 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.138 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.138 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.138 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.138 fio-3.35 00:39:03.138 Starting 4 threads 00:39:04.513 00:39:04.513 job0: (groupid=0, jobs=1): err= 0: pid=3369957: Mon Oct 28 15:34:51 2024 00:39:04.513 read: IOPS=1961, BW=7844KiB/s (8032kB/s)(7852KiB/1001msec) 00:39:04.513 slat (nsec): min=6766, max=40490, avg=9208.73, stdev=2760.57 00:39:04.513 clat (usec): min=201, max=596, avg=281.70, stdev=53.67 00:39:04.513 lat (usec): min=210, max=612, avg=290.91, stdev=54.51 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:39:04.513 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 285], 00:39:04.513 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 388], 00:39:04.513 | 99.00th=[ 510], 99.50th=[ 553], 99.90th=[ 594], 99.95th=[ 594], 00:39:04.513 | 99.99th=[ 594] 00:39:04.513 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:04.513 slat (nsec): min=7967, max=49857, avg=12120.74, stdev=4944.81 00:39:04.513 clat (usec): 
min=140, max=889, avg=189.88, stdev=36.94 00:39:04.513 lat (usec): min=148, max=903, avg=202.00, stdev=37.81 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:39:04.513 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:39:04.513 | 70.00th=[ 200], 80.00th=[ 221], 90.00th=[ 241], 95.00th=[ 255], 00:39:04.513 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 375], 99.95th=[ 392], 00:39:04.513 | 99.99th=[ 889] 00:39:04.513 bw ( KiB/s): min= 8192, max= 8192, per=51.95%, avg=8192.00, stdev= 0.00, samples=1 00:39:04.513 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:04.513 lat (usec) : 250=63.38%, 500=36.05%, 750=0.55%, 1000=0.02% 00:39:04.513 cpu : usr=2.60%, sys=6.20%, ctx=4013, majf=0, minf=1 00:39:04.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.513 issued rwts: total=1963,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:04.513 job1: (groupid=0, jobs=1): err= 0: pid=3369958: Mon Oct 28 15:34:51 2024 00:39:04.513 read: IOPS=79, BW=318KiB/s (325kB/s)(328KiB/1033msec) 00:39:04.513 slat (nsec): min=7197, max=25412, avg=11241.30, stdev=4070.24 00:39:04.513 clat (usec): min=275, max=41270, avg=11256.63, stdev=18128.40 00:39:04.513 lat (usec): min=287, max=41279, avg=11267.87, stdev=18129.31 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:39:04.513 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 420], 00:39:04.513 | 70.00th=[ 494], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:04.513 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:04.513 | 99.99th=[41157] 00:39:04.513 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:39:04.513 slat (nsec): min=7030, max=38016, avg=11731.33, stdev=5269.49 00:39:04.513 clat (usec): min=143, max=1374, avg=195.69, stdev=61.18 00:39:04.513 lat (usec): min=164, max=1398, avg=207.42, stdev=61.84 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:39:04.513 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:39:04.513 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 231], 95.00th=[ 249], 00:39:04.513 | 99.00th=[ 297], 99.50th=[ 379], 99.90th=[ 1369], 99.95th=[ 1369], 00:39:04.513 | 99.99th=[ 1369] 00:39:04.513 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:39:04.513 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:04.513 lat (usec) : 250=81.99%, 500=13.64%, 750=0.51% 00:39:04.513 lat (msec) : 2=0.17%, 50=3.70% 00:39:04.513 cpu : usr=0.58%, sys=0.48%, ctx=595, majf=0, minf=2 00:39:04.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.513 issued rwts: total=82,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:04.513 job2: (groupid=0, jobs=1): err= 0: pid=3369959: Mon Oct 28 15:34:51 2024 00:39:04.513 read: IOPS=519, BW=2080KiB/s 
(2130kB/s)(2088KiB/1004msec) 00:39:04.513 slat (nsec): min=7928, max=29422, avg=10569.90, stdev=2213.03 00:39:04.513 clat (usec): min=213, max=41055, avg=1464.63, stdev=6797.55 00:39:04.513 lat (usec): min=221, max=41070, avg=1475.20, stdev=6798.19 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:39:04.513 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:39:04.513 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 359], 00:39:04.513 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:04.513 | 99.99th=[41157] 00:39:04.513 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:39:04.513 slat (nsec): min=8120, max=50385, avg=10906.19, stdev=3636.28 00:39:04.513 clat (usec): min=154, max=814, avg=210.89, stdev=40.10 00:39:04.513 lat (usec): min=163, max=843, avg=221.80, stdev=41.04 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 182], 00:39:04.513 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 217], 00:39:04.513 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:39:04.513 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 701], 99.95th=[ 816], 00:39:04.513 | 99.99th=[ 816] 00:39:04.513 bw ( KiB/s): min= 4096, max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=2 00:39:04.513 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:39:04.513 lat (usec) : 250=70.05%, 500=28.72%, 750=0.13%, 1000=0.06% 00:39:04.513 lat (msec) : 50=1.03% 00:39:04.513 cpu : usr=0.90%, sys=2.29%, ctx=1549, majf=0, minf=1 00:39:04.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.513 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:04.513 job3: (groupid=0, jobs=1): err= 0: pid=3369960: Mon Oct 28 15:34:51 2024 00:39:04.513 read: IOPS=336, BW=1347KiB/s (1380kB/s)(1400KiB/1039msec) 00:39:04.513 slat (nsec): min=6910, max=26763, avg=9217.33, stdev=2775.15 00:39:04.513 clat (usec): min=231, max=41948, avg=2628.34, stdev=9465.64 00:39:04.513 lat (usec): min=240, max=41963, avg=2637.56, stdev=9466.90 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 262], 00:39:04.513 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 310], 00:39:04.513 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 396], 95.00th=[41157], 00:39:04.513 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:04.513 | 99.99th=[42206] 00:39:04.513 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:39:04.513 slat (nsec): min=8368, max=38791, avg=10088.77, stdev=2100.22 00:39:04.513 clat (usec): min=155, max=1343, avg=209.50, stdev=58.73 00:39:04.513 lat (usec): min=164, max=1361, avg=219.59, stdev=59.25 00:39:04.513 clat percentiles (usec): 00:39:04.513 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:39:04.514 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 212], 00:39:04.514 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 245], 00:39:04.514 | 99.00th=[ 251], 99.50th=[ 334], 99.90th=[ 1352], 99.95th=[ 1352], 00:39:04.514 | 99.99th=[ 1352] 00:39:04.514 bw ( KiB/s): min= 4096, 
max= 4096, per=25.98%, avg=4096.00, stdev= 0.00, samples=1 00:39:04.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:04.514 lat (usec) : 250=62.06%, 500=35.38%, 750=0.12% 00:39:04.514 lat (msec) : 2=0.12%, 50=2.32% 00:39:04.514 cpu : usr=0.19%, sys=1.45%, ctx=862, majf=0, minf=2 00:39:04.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.514 issued rwts: total=350,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:04.514 00:39:04.514 Run status group 0 (all jobs): 00:39:04.514 READ: bw=11.0MiB/s (11.5MB/s), 318KiB/s-7844KiB/s (325kB/s-8032kB/s), io=11.4MiB (11.9MB), run=1001-1039msec 00:39:04.514 WRITE: bw=15.4MiB/s (16.1MB/s), 1971KiB/s-8184KiB/s (2018kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1039msec 00:39:04.514 00:39:04.514 Disk stats (read/write): 00:39:04.514 nvme0n1: ios=1589/1778, merge=0/0, ticks=1117/331, in_queue=1448, util=98.00% 00:39:04.514 nvme0n2: ios=82/512, merge=0/0, ticks=725/84, in_queue=809, util=85.19% 00:39:04.514 nvme0n3: ios=573/1024, merge=0/0, ticks=1461/207, in_queue=1668, util=98.30% 00:39:04.514 nvme0n4: ios=344/512, merge=0/0, ticks=712/106, in_queue=818, util=89.39% 00:39:04.514 15:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:04.514 [global] 00:39:04.514 thread=1 00:39:04.514 invalidate=1 00:39:04.514 rw=write 00:39:04.514 time_based=1 00:39:04.514 runtime=1 00:39:04.514 ioengine=libaio 00:39:04.514 direct=1 00:39:04.514 bs=4096 00:39:04.514 iodepth=128 00:39:04.514 norandommap=0 00:39:04.514 numjobs=1 00:39:04.514 00:39:04.514 verify_dump=1 00:39:04.514 verify_backlog=512 00:39:04.514 verify_state_save=0 00:39:04.514 do_verify=1 00:39:04.514 verify=crc32c-intel 00:39:04.514 [job0] 00:39:04.514 filename=/dev/nvme0n1 00:39:04.514 [job1] 00:39:04.514 filename=/dev/nvme0n2 00:39:04.514 [job2] 00:39:04.514 filename=/dev/nvme0n3 00:39:04.514 [job3] 00:39:04.514 filename=/dev/nvme0n4 00:39:04.514 Could not set queue depth (nvme0n1) 00:39:04.514 Could not set queue depth (nvme0n2) 00:39:04.514 Could not set queue depth (nvme0n3) 00:39:04.514 Could not set queue depth (nvme0n4) 00:39:04.514 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:04.514 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:04.514 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:04.514 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:04.514 fio-3.35 00:39:04.514 Starting 4 threads 00:39:05.889 00:39:05.889 job0: (groupid=0, jobs=1): err= 0: pid=3370180: Mon Oct 28 15:34:52 2024 00:39:05.889 read: IOPS=2956, BW=11.5MiB/s (12.1MB/s)(12.3MiB/1061msec) 00:39:05.889 slat (usec): min=2, max=46150, avg=157.20, stdev=1289.41 00:39:05.889 clat (msec): min=7, max=105, avg=18.93, stdev=11.94 00:39:05.889 lat (msec): min=7, max=105, avg=19.09, stdev=12.06 00:39:05.889 clat percentiles (msec): 00:39:05.889 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 12], 00:39:05.889 | 30.00th=[ 14], 40.00th=[ 16], 
50.00th=[ 17], 60.00th=[ 18], 00:39:05.889 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 27], 95.00th=[ 30], 00:39:05.889 | 99.00th=[ 97], 99.50th=[ 97], 99.90th=[ 97], 99.95th=[ 106], 00:39:05.889 | 99.99th=[ 106] 00:39:05.889 write: IOPS=3377, BW=13.2MiB/s (13.8MB/s)(14.0MiB/1061msec); 0 zone resets 00:39:05.889 slat (usec): min=3, max=14796, avg=132.78, stdev=820.47 00:39:05.889 clat (usec): min=1317, max=110726, avg=20965.80, stdev=14658.94 00:39:05.889 lat (usec): min=1324, max=116456, avg=21098.58, stdev=14706.50 00:39:05.889 clat percentiles (msec): 00:39:05.889 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 12], 00:39:05.889 | 30.00th=[ 13], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 22], 00:39:05.889 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 34], 95.00th=[ 41], 00:39:05.889 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:39:05.889 | 99.99th=[ 111] 00:39:05.889 bw ( KiB/s): min=12336, max=15832, per=24.31%, avg=14084.00, stdev=2472.05, samples=2 00:39:05.889 iops : min= 3084, max= 3958, avg=3521.00, stdev=618.01, samples=2 00:39:05.889 lat (msec) : 2=0.07%, 4=0.21%, 10=6.78%, 20=56.87%, 50=34.18% 00:39:05.889 lat (msec) : 100=0.94%, 250=0.95% 00:39:05.889 cpu : usr=1.51%, sys=3.87%, ctx=246, majf=0, minf=1 00:39:05.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:05.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:05.889 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:05.889 job1: (groupid=0, jobs=1): err= 0: pid=3370181: Mon Oct 28 15:34:52 2024 00:39:05.889 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:39:05.889 slat (usec): min=3, max=10478, avg=97.34, stdev=474.33 00:39:05.889 clat (msec): min=7, max=100, avg=13.54, stdev= 9.02 00:39:05.889 lat (msec): min=7, max=100, avg=13.63, stdev= 9.04 00:39:05.889 clat percentiles (msec): 00:39:05.889 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:39:05.889 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:39:05.889 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 23], 95.00th=[ 27], 00:39:05.889 | 99.00th=[ 55], 99.50th=[ 71], 99.90th=[ 101], 99.95th=[ 101], 00:39:05.889 | 99.99th=[ 101] 00:39:05.889 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(17.8MiB/1002msec); 0 zone resets 00:39:05.889 slat (usec): min=4, max=10793, avg=124.08, stdev=659.01 00:39:05.889 clat (usec): min=1471, max=96531, avg=15581.64, stdev=13024.04 00:39:05.889 lat (usec): min=1480, max=96538, avg=15705.71, stdev=13113.45 00:39:05.889 clat percentiles (usec): 00:39:05.889 | 1.00th=[ 5014], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290], 00:39:05.889 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:39:05.889 | 70.00th=[13304], 80.00th=[15008], 90.00th=[22676], 95.00th=[34341], 00:39:05.889 | 99.00th=[79168], 99.50th=[90702], 99.90th=[96994], 99.95th=[96994], 00:39:05.889 | 99.99th=[96994] 00:39:05.889 bw ( KiB/s): min=15696, max=19792, per=30.63%, avg=17744.00, stdev=2896.31, samples=2 00:39:05.889 iops : min= 3924, max= 4948, avg=4436.00, stdev=724.08, samples=2 00:39:05.890 lat (msec) : 2=0.20%, 10=25.30%, 20=60.28%, 50=11.35%, 100=2.81% 00:39:05.890 lat (msec) : 250=0.06% 00:39:05.890 cpu : usr=4.30%, sys=7.39%, ctx=575, majf=0, minf=1 00:39:05.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:05.890 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:05.890 issued rwts: total=4096,4563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:05.890 job2: (groupid=0, jobs=1): err= 0: pid=3370182: Mon Oct 28 15:34:52 2024 00:39:05.890 read: IOPS=4335, BW=16.9MiB/s (17.8MB/s)(17.1MiB/1008msec) 00:39:05.890 slat (usec): min=2, max=12514, avg=111.79, stdev=787.03 00:39:05.890 clat (usec): min=2714, max=51011, avg=14780.44, stdev=7355.73 00:39:05.890 lat (usec): min=2719, max=51028, avg=14892.23, stdev=7413.20 00:39:05.890 clat percentiles (usec): 00:39:05.890 | 1.00th=[ 5866], 5.00th=[ 7439], 10.00th=[ 9110], 20.00th=[10683], 00:39:05.890 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12780], 00:39:05.890 | 70.00th=[14353], 80.00th=[17433], 90.00th=[23987], 95.00th=[34866], 00:39:05.890 | 99.00th=[39060], 99.50th=[39060], 99.90th=[44827], 99.95th=[45351], 00:39:05.890 | 99.99th=[51119] 00:39:05.890 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:39:05.890 slat (usec): min=4, max=10901, avg=96.39, stdev=695.72 00:39:05.890 clat (usec): min=676, max=37393, avg=13650.91, stdev=5763.91 00:39:05.890 lat (usec): min=681, max=37401, avg=13747.30, stdev=5799.87 00:39:05.890 clat percentiles (usec): 00:39:05.890 | 1.00th=[ 4293], 5.00th=[ 6980], 10.00th=[ 7832], 20.00th=[ 9896], 00:39:05.890 | 30.00th=[10552], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:39:05.890 | 70.00th=[14484], 80.00th=[17171], 90.00th=[21103], 95.00th=[25822], 00:39:05.890 | 99.00th=[33817], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:39:05.890 | 99.99th=[37487] 00:39:05.890 bw ( KiB/s): min=16720, max=20144, per=31.81%, avg=18432.00, stdev=2421.13, samples=2 00:39:05.890 iops : min= 4180, max= 5036, avg=4608.00, stdev=605.28, samples=2 00:39:05.890 lat (usec) : 750=0.06% 00:39:05.890 lat (msec) : 2=0.01%, 4=0.68%, 10=17.54%, 20=67.03%, 50=14.67% 00:39:05.890 lat (msec) : 100=0.01% 00:39:05.890 cpu : usr=3.48%, sys=6.75%, ctx=291, majf=0, minf=1 00:39:05.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:05.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:05.890 issued rwts: total=4370,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:05.890 job3: (groupid=0, jobs=1): err= 0: pid=3370183: Mon Oct 28 15:34:52 2024 00:39:05.890 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:39:05.890 slat (usec): min=4, max=13750, avg=153.47, stdev=936.59 00:39:05.890 clat (usec): min=6039, max=54617, avg=18639.97, stdev=7544.56 00:39:05.890 lat (usec): min=6053, max=54625, avg=18793.44, stdev=7643.28 00:39:05.890 clat percentiles (usec): 00:39:05.890 | 1.00th=[ 6915], 5.00th=[ 9896], 10.00th=[12387], 20.00th=[12911], 00:39:05.890 | 30.00th=[13173], 40.00th=[13829], 50.00th=[15795], 60.00th=[19006], 00:39:05.890 | 70.00th=[22152], 80.00th=[25822], 90.00th=[27395], 95.00th=[31327], 00:39:05.890 | 99.00th=[45351], 99.50th=[49021], 99.90th=[54789], 99.95th=[54789], 00:39:05.890 | 99.99th=[54789] 00:39:05.890 write: IOPS=2605, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1003msec); 0 zone resets 00:39:05.890 slat (usec): min=4, max=8846, avg=225.02, stdev=873.51 00:39:05.890 clat (usec): min=369, max=78486, 
avg=30259.79, stdev=19173.05 00:39:05.890 lat (usec): min=3327, max=78502, avg=30484.81, stdev=19295.54 00:39:05.890 clat percentiles (usec): 00:39:05.890 | 1.00th=[ 3687], 5.00th=[12387], 10.00th=[12518], 20.00th=[12911], 00:39:05.890 | 30.00th=[13173], 40.00th=[14484], 50.00th=[23987], 60.00th=[31589], 00:39:05.890 | 70.00th=[41681], 80.00th=[47449], 90.00th=[61604], 95.00th=[69731], 00:39:05.890 | 99.00th=[72877], 99.50th=[72877], 99.90th=[78119], 99.95th=[78119], 00:39:05.890 | 99.99th=[78119] 00:39:05.890 bw ( KiB/s): min= 8208, max=12247, per=17.65%, avg=10227.50, stdev=2856.00, samples=2 00:39:05.890 iops : min= 2052, max= 3061, avg=2556.50, stdev=713.47, samples=2 00:39:05.890 lat (usec) : 500=0.02% 00:39:05.890 lat (msec) : 4=0.66%, 10=3.15%, 20=50.61%, 50=36.63%, 100=8.93% 00:39:05.890 cpu : usr=3.59%, sys=3.39%, ctx=357, majf=0, minf=1 00:39:05.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:05.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:05.890 issued rwts: total=2560,2613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:05.890 00:39:05.890 Run status group 0 (all jobs): 00:39:05.890 READ: bw=52.1MiB/s (54.7MB/s), 9.97MiB/s-16.9MiB/s (10.5MB/s-17.8MB/s), io=55.3MiB (58.0MB), run=1002-1061msec 00:39:05.890 WRITE: bw=56.6MiB/s (59.3MB/s), 10.2MiB/s-17.9MiB/s (10.7MB/s-18.7MB/s), io=60.0MiB (62.9MB), run=1002-1061msec 00:39:05.890 00:39:05.890 Disk stats (read/write): 00:39:05.890 nvme0n1: ios=2659/3072, merge=0/0, ticks=28124/36832, in_queue=64956, util=99.90% 00:39:05.890 nvme0n2: ios=3634/3927, merge=0/0, ticks=13981/21058, in_queue=35039, util=96.84% 00:39:05.890 nvme0n3: ios=3641/3815, merge=0/0, ticks=39590/37185, in_queue=76775, util=96.73% 00:39:05.890 nvme0n4: ios=2048/2215, merge=0/0, ticks=16834/28689, in_queue=45523, util=89.13% 00:39:05.890 15:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:05.890 [global] 00:39:05.890 thread=1 00:39:05.890 invalidate=1 00:39:05.890 rw=randwrite 00:39:05.890 time_based=1 00:39:05.890 runtime=1 00:39:05.890 ioengine=libaio 00:39:05.890 direct=1 00:39:05.890 bs=4096 00:39:05.890 iodepth=128 00:39:05.890 norandommap=0 00:39:05.890 numjobs=1 00:39:05.890 00:39:05.890 verify_dump=1 00:39:05.890 verify_backlog=512 00:39:05.890 verify_state_save=0 00:39:05.890 do_verify=1 00:39:05.890 verify=crc32c-intel 00:39:05.890 [job0] 00:39:05.890 filename=/dev/nvme0n1 00:39:05.890 [job1] 00:39:05.890 filename=/dev/nvme0n2 00:39:05.890 [job2] 00:39:05.890 filename=/dev/nvme0n3 00:39:05.890 [job3] 00:39:05.890 filename=/dev/nvme0n4 00:39:05.890 Could not set queue depth (nvme0n1) 00:39:05.890 Could not set queue depth (nvme0n2) 00:39:05.890 Could not set queue depth (nvme0n3) 00:39:05.890 Could not set queue depth (nvme0n4) 00:39:06.148 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:06.148 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:06.148 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:06.148 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:39:06.148 fio-3.35 00:39:06.148 Starting 4 threads 00:39:07.528 00:39:07.528 job0: (groupid=0, jobs=1): err= 0: pid=3370414: Mon Oct 28 15:34:54 2024 00:39:07.528 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:39:07.528 slat (usec): min=2, max=6935, avg=121.15, stdev=605.56 00:39:07.528 clat (usec): min=8247, max=31556, avg=14952.68, stdev=4213.97 00:39:07.529 lat (usec): min=8254, max=31564, avg=15073.83, stdev=4250.09 00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11863], 00:39:07.529 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13829], 60.00th=[15664], 00:39:07.529 | 70.00th=[16909], 80.00th=[17433], 90.00th=[19792], 95.00th=[21627], 00:39:07.529 | 99.00th=[31065], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589], 00:39:07.529 | 99.99th=[31589] 00:39:07.529 write: IOPS=3740, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1005msec); 0 zone resets 00:39:07.529 slat (usec): min=3, max=25402, avg=142.78, stdev=784.27 00:39:07.529 clat (usec): min=4042, max=45681, avg=19237.34, stdev=7861.55 00:39:07.529 lat (usec): min=4823, max=45744, avg=19380.12, stdev=7905.96 00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 7242], 5.00th=[10421], 10.00th=[11338], 20.00th=[11731], 00:39:07.529 | 30.00th=[11863], 40.00th=[13304], 50.00th=[19268], 60.00th=[21890], 00:39:07.529 | 70.00th=[22938], 80.00th=[27132], 90.00th=[31065], 95.00th=[32637], 00:39:07.529 | 99.00th=[36963], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:39:07.529 | 99.99th=[45876] 00:39:07.529 bw ( KiB/s): min=13032, max=16024, per=21.98%, avg=14528.00, stdev=2115.66, samples=2 00:39:07.529 iops : min= 3258, max= 4006, avg=3632.00, stdev=528.92, samples=2 00:39:07.529 lat (msec) : 10=6.06%, 20=64.36%, 50=29.58% 00:39:07.529 cpu : usr=3.39%, sys=6.18%, ctx=437, majf=0, minf=1 00:39:07.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:07.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:07.529 issued rwts: total=3584,3759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:07.529 job1: (groupid=0, jobs=1): err= 0: pid=3370415: Mon Oct 28 15:34:54 2024 00:39:07.529 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:39:07.529 slat (usec): min=2, max=8251, avg=124.69, stdev=613.78 00:39:07.529 clat (usec): min=7387, max=34704, avg=15830.97, stdev=5894.51 00:39:07.529 lat (usec): min=7393, max=34709, avg=15955.67, stdev=5930.38 00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[10814], 00:39:07.529 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13304], 60.00th=[14746], 00:39:07.529 | 70.00th=[19268], 80.00th=[21627], 90.00th=[23462], 95.00th=[27395], 00:39:07.529 | 99.00th=[32375], 99.50th=[32375], 99.90th=[34866], 99.95th=[34866], 00:39:07.529 | 99.99th=[34866] 00:39:07.529 write: IOPS=4130, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1003msec); 0 zone resets 00:39:07.529 slat (usec): min=3, max=20001, avg=111.54, stdev=701.15 00:39:07.529 clat (usec): min=300, max=35483, avg=15068.54, stdev=5182.81 00:39:07.529 lat (usec): min=853, max=35510, avg=15180.08, stdev=5203.41 00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 5080], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10683], 00:39:07.529 | 30.00th=[12125], 40.00th=[13173], 50.00th=[14484], 
60.00th=[15926], 00:39:07.529 | 70.00th=[16319], 80.00th=[17171], 90.00th=[22676], 95.00th=[27919], 00:39:07.529 | 99.00th=[32113], 99.50th=[32637], 99.90th=[33817], 99.95th=[33817], 00:39:07.529 | 99.99th=[35390] 00:39:07.529 bw ( KiB/s): min=13224, max=19544, per=24.79%, avg=16384.00, stdev=4468.91, samples=2 00:39:07.529 iops : min= 3306, max= 4886, avg=4096.00, stdev=1117.23, samples=2 00:39:07.529 lat (usec) : 500=0.01%, 1000=0.01% 00:39:07.529 lat (msec) : 4=0.44%, 10=6.28%, 20=73.31%, 50=19.95% 00:39:07.529 cpu : usr=3.59%, sys=4.89%, ctx=357, majf=0, minf=2 00:39:07.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:07.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:07.529 issued rwts: total=4096,4143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:07.529 job2: (groupid=0, jobs=1): err= 0: pid=3370418: Mon Oct 28 15:34:54 2024 00:39:07.529 read: IOPS=4243, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1003msec) 00:39:07.529 slat (usec): min=2, max=7509, avg=112.48, stdev=570.57 00:39:07.529 clat (usec): min=1318, max=30216, avg=14247.74, stdev=3261.55 00:39:07.529 lat (usec): min=3295, max=30232, avg=14360.22, stdev=3282.46 00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 5407], 5.00th=[10421], 10.00th=[11207], 20.00th=[11994], 00:39:07.529 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[14091], 00:39:07.529 | 70.00th=[14877], 80.00th=[15926], 90.00th=[18482], 95.00th=[21890], 00:39:07.529 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25560], 99.95th=[29492], 00:39:07.529 | 99.99th=[30278] 00:39:07.529 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:39:07.529 slat (usec): min=4, max=6804, avg=105.80, stdev=515.22 00:39:07.529 clat (usec): min=7775, max=26086, avg=14297.97, stdev=2388.00 00:39:07.529 lat (usec): min=7785, max=26096, avg=14403.76, stdev=2409.21 00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:39:07.529 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[14091], 00:39:07.529 | 70.00th=[14484], 80.00th=[15664], 90.00th=[18220], 95.00th=[19268], 00:39:07.529 | 99.00th=[21627], 99.50th=[22414], 99.90th=[23200], 99.95th=[23725], 00:39:07.529 | 99.99th=[26084] 00:39:07.529 bw ( KiB/s): min=18416, max=18448, per=27.89%, avg=18432.00, stdev=22.63, samples=2 00:39:07.529 iops : min= 4604, max= 4612, avg=4608.00, stdev= 5.66, samples=2 00:39:07.529 lat (msec) : 2=0.01%, 4=0.29%, 10=2.29%, 20=92.43%, 50=4.98% 00:39:07.529 cpu : usr=3.99%, sys=6.99%, ctx=516, majf=0, minf=1 00:39:07.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:07.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:07.529 issued rwts: total=4256,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:07.529 job3: (groupid=0, jobs=1): err= 0: pid=3370419: Mon Oct 28 15:34:54 2024 00:39:07.529 read: IOPS=4044, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1003msec) 00:39:07.529 slat (usec): min=2, max=8972, avg=112.08, stdev=573.97 00:39:07.529 clat (usec): min=627, max=35877, avg=14524.23, stdev=4603.62 00:39:07.529 lat (usec): min=3406, max=35899, avg=14636.32, stdev=4630.46 
00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 4490], 5.00th=[ 8717], 10.00th=[10814], 20.00th=[11994], 00:39:07.529 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13566], 60.00th=[13829], 00:39:07.529 | 70.00th=[15008], 80.00th=[17171], 90.00th=[20579], 95.00th=[24249], 00:39:07.529 | 99.00th=[31589], 99.50th=[33162], 99.90th=[33162], 99.95th=[33424], 00:39:07.529 | 99.99th=[35914] 00:39:07.529 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:39:07.529 slat (usec): min=3, max=6983, avg=127.29, stdev=629.39 00:39:07.529 clat (usec): min=7450, max=46998, avg=16590.20, stdev=7978.76 00:39:07.529 lat (usec): min=7455, max=47004, avg=16717.49, stdev=8035.16 00:39:07.529 clat percentiles (usec): 00:39:07.529 | 1.00th=[ 8586], 5.00th=[10290], 10.00th=[11600], 20.00th=[12387], 00:39:07.529 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:39:07.529 | 70.00th=[14222], 80.00th=[19268], 90.00th=[33162], 95.00th=[34866], 00:39:07.529 | 99.00th=[41681], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:39:07.529 | 99.99th=[46924] 00:39:07.529 bw ( KiB/s): min=16024, max=16744, per=24.79%, avg=16384.00, stdev=509.12, samples=2 00:39:07.529 iops : min= 4006, max= 4186, avg=4096.00, stdev=127.28, samples=2 00:39:07.529 lat (usec) : 750=0.01% 00:39:07.529 lat (msec) : 4=0.28%, 10=6.02%, 20=78.33%, 50=15.36% 00:39:07.529 cpu : usr=2.20%, sys=5.39%, ctx=461, majf=0, minf=1 00:39:07.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:07.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:07.529 issued rwts: total=4057,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:07.529 00:39:07.529 Run status group 0 (all jobs): 00:39:07.529 READ: bw=62.2MiB/s (65.2MB/s), 13.9MiB/s-16.6MiB/s (14.6MB/s-17.4MB/s), io=62.5MiB (65.5MB), run=1003-1005msec 00:39:07.529 WRITE: bw=64.5MiB/s (67.7MB/s), 14.6MiB/s-17.9MiB/s (15.3MB/s-18.8MB/s), io=64.9MiB (68.0MB), run=1003-1005msec 00:39:07.529 00:39:07.529 Disk stats (read/write): 00:39:07.529 nvme0n1: ios=3114/3127, merge=0/0, ticks=20809/27194, in_queue=48003, util=85.87% 00:39:07.529 nvme0n2: ios=3328/3584, merge=0/0, ticks=16327/17527, in_queue=33854, util=85.12% 00:39:07.529 nvme0n3: ios=3621/3839, merge=0/0, ticks=16934/16949, in_queue=33883, util=96.95% 00:39:07.529 nvme0n4: ios=3072/3584, merge=0/0, ticks=17365/22482, in_queue=39847, util=89.49% 00:39:07.529 15:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:07.529 15:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:07.529 15:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3370549 00:39:07.529 15:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:07.529 [global] 00:39:07.529 thread=1 00:39:07.529 invalidate=1 00:39:07.529 rw=read 00:39:07.529 time_based=1 00:39:07.529 runtime=10 00:39:07.529 ioengine=libaio 00:39:07.529 direct=1 00:39:07.529 bs=4096 00:39:07.529 iodepth=1 00:39:07.529 norandommap=1 00:39:07.529 numjobs=1 00:39:07.529 00:39:07.529 [job0] 00:39:07.529 filename=/dev/nvme0n1 00:39:07.529 [job1] 00:39:07.529 filename=/dev/nvme0n2 
00:39:07.529 [job2] 00:39:07.529 filename=/dev/nvme0n3 00:39:07.529 [job3] 00:39:07.529 filename=/dev/nvme0n4 00:39:07.529 Could not set queue depth (nvme0n1) 00:39:07.529 Could not set queue depth (nvme0n2) 00:39:07.529 Could not set queue depth (nvme0n3) 00:39:07.529 Could not set queue depth (nvme0n4) 00:39:07.529 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:07.529 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:07.529 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:07.529 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:07.529 fio-3.35 00:39:07.529 Starting 4 threads 00:39:10.809 15:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:10.809 15:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:10.809 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=765952, buflen=4096 00:39:10.809 fio: pid=3370763, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:11.066 15:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:11.066 15:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:11.066 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12353536, buflen=4096 00:39:11.066 fio: pid=3370762, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:11.323 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=32116736, buflen=4096 00:39:11.323 fio: pid=3370722, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:11.323 15:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:11.323 15:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:11.581 15:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:11.581 15:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:11.839 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4005888, buflen=4096 00:39:11.839 fio: pid=3370740, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:11.839 00:39:11.839 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3370722: Mon Oct 28 15:34:58 2024 00:39:11.839 read: IOPS=2223, BW=8893KiB/s (9106kB/s)(30.6MiB/3527msec) 00:39:11.839 slat (usec): min=4, max=11898, avg=11.97, stdev=198.28 00:39:11.839 clat (usec): min=193, max=41260, avg=432.99, 
stdev=2713.84 00:39:11.839 lat (usec): min=199, max=41276, avg=444.96, stdev=2721.73 00:39:11.839 clat percentiles (usec): 00:39:11.839 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:39:11.839 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 245], 00:39:11.839 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:39:11.839 | 99.00th=[ 510], 99.50th=[ 988], 99.90th=[41157], 99.95th=[41157], 00:39:11.839 | 99.99th=[41157] 00:39:11.839 bw ( KiB/s): min= 104, max=15560, per=63.58%, avg=7846.67, stdev=7290.38, samples=6 00:39:11.839 iops : min= 26, max= 3890, avg=1961.67, stdev=1822.59, samples=6 00:39:11.839 lat (usec) : 250=62.84%, 500=36.04%, 750=0.42%, 1000=0.19% 00:39:11.839 lat (msec) : 2=0.04%, 10=0.01%, 50=0.45% 00:39:11.839 cpu : usr=0.26%, sys=3.35%, ctx=7848, majf=0, minf=1 00:39:11.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 issued rwts: total=7842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.839 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3370740: Mon Oct 28 15:34:58 2024 00:39:11.839 read: IOPS=251, BW=1004KiB/s (1028kB/s)(3912KiB/3897msec) 00:39:11.839 slat (usec): min=6, max=15789, avg=31.91, stdev=549.37 00:39:11.839 clat (usec): min=199, max=44979, avg=3918.81, stdev=11689.73 00:39:11.839 lat (usec): min=206, max=56924, avg=3950.72, stdev=11777.83 00:39:11.839 clat percentiles (usec): 00:39:11.839 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:39:11.839 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:39:11.839 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 453], 95.00th=[41157], 00:39:11.839 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:39:11.839 | 99.99th=[44827] 00:39:11.839 bw ( KiB/s): min= 96, max= 6456, per=8.95%, avg=1105.43, stdev=2370.01, samples=7 00:39:11.839 iops : min= 24, max= 1614, avg=276.29, stdev=592.54, samples=7 00:39:11.839 lat (usec) : 250=63.94%, 500=26.25%, 750=0.72% 00:39:11.839 lat (msec) : 50=8.99% 00:39:11.839 cpu : usr=0.13%, sys=0.33%, ctx=984, majf=0, minf=1 00:39:11.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 issued rwts: total=979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.839 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3370762: Mon Oct 28 15:34:58 2024 00:39:11.839 read: IOPS=931, BW=3726KiB/s (3815kB/s)(11.8MiB/3238msec) 00:39:11.839 slat (usec): min=7, max=7568, avg=12.62, stdev=163.66 00:39:11.839 clat (usec): min=226, max=41985, avg=1050.87, stdev=5508.60 00:39:11.839 lat (usec): min=234, max=42000, avg=1063.49, stdev=5511.74 00:39:11.839 clat percentiles (usec): 00:39:11.839 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:39:11.839 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:39:11.839 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 343], 95.00th=[ 404], 00:39:11.839 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:39:11.839 | 99.99th=[42206] 00:39:11.839 bw ( KiB/s): min= 96, max=12376, per=32.51%, avg=4012.00, stdev=5807.15, samples=6 00:39:11.839 iops : min= 24, max= 3094, avg=1003.00, stdev=1451.79, samples=6 00:39:11.839 lat (usec) : 250=7.95%, 500=89.13%, 750=0.96% 00:39:11.839 lat (msec) : 20=0.07%, 50=1.86% 00:39:11.839 cpu : usr=0.59%, sys=1.05%, ctx=3021, majf=0, minf=2 00:39:11.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 issued rwts: total=3017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.839 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3370763: Mon Oct 28 15:34:58 2024 00:39:11.839 read: IOPS=64, BW=257KiB/s (263kB/s)(748KiB/2914msec) 00:39:11.839 slat (nsec): min=7046, max=35173, avg=11591.36, stdev=5865.84 00:39:11.839 clat (usec): min=226, max=43947, avg=15438.66, stdev=19793.88 00:39:11.839 lat (usec): min=233, max=43965, avg=15450.24, stdev=19797.67 00:39:11.839 clat percentiles (usec): 00:39:11.839 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 302], 00:39:11.839 | 30.00th=[ 330], 40.00th=[ 363], 50.00th=[ 408], 60.00th=[ 457], 00:39:11.839 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:39:11.839 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:39:11.839 | 99.99th=[43779] 00:39:11.839 bw ( KiB/s): min= 96, max= 992, per=2.29%, avg=283.20, stdev=396.61, samples=5 00:39:11.839 iops : min= 24, max= 248, avg=70.80, stdev=99.15, samples=5 00:39:11.839 lat (usec) : 250=0.53%, 500=60.64%, 750=1.60% 00:39:11.839 lat (msec) : 50=36.70% 00:39:11.839 cpu : usr=0.00%, sys=0.17%, ctx=188, majf=0, minf=2 00:39:11.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.839 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.839 00:39:11.839 Run status group 0 (all jobs): 00:39:11.839 READ: bw=12.0MiB/s (12.6MB/s), 257KiB/s-8893KiB/s (263kB/s-9106kB/s), io=47.0MiB (49.2MB), run=2914-3897msec 00:39:11.839 00:39:11.839 Disk stats (read/write): 00:39:11.839 nvme0n1: ios=7106/0, merge=0/0, ticks=3827/0, in_queue=3827, util=99.20% 00:39:11.839 nvme0n2: ios=1016/0, merge=0/0, ticks=4028/0, in_queue=4028, util=99.37% 00:39:11.839 nvme0n3: ios=3056/0, merge=0/0, ticks=3931/0, in_queue=3931, util=99.34% 00:39:11.839 nvme0n4: ios=185/0, merge=0/0, ticks=2804/0, in_queue=2804, util=96.67% 00:39:12.402 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:12.402 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:12.660 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:12.660 15:34:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:12.919 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:12.919 15:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:13.218 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:13.218 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:13.811 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3370549 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:13.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:13.812 nvmf hotplug test: fio failed as expected 00:39:13.812 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:14.069 rmmod nvme_tcp 00:39:14.069 rmmod nvme_fabrics 00:39:14.069 rmmod nvme_keyring 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3368036 ']' 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3368036 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3368036 ']' 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3368036 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:14.069 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:14.070 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3368036 00:39:14.329 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:14.329 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:14.329 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3368036' 00:39:14.329 killing process with pid 3368036 00:39:14.329 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3368036 00:39:14.329 15:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3368036 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:14.588 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.589 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.589 15:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.493 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:16.493 00:39:16.493 real 0m31.269s 00:39:16.493 user 1m21.843s 00:39:16.493 sys 0m12.454s 00:39:16.493 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:16.493 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:16.493 ************************************ 00:39:16.493 END TEST nvmf_fio_target 00:39:16.493 ************************************ 00:39:16.493 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:16.493 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:16.493 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:16.493 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:16.752 ************************************ 00:39:16.752 START TEST nvmf_bdevio 00:39:16.752 ************************************ 00:39:16.752 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:16.752 * Looking for test storage... 
00:39:16.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:16.752 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:39:16.752 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lcov --version 00:39:16.752 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:17.012 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:39:17.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:17.012 --rc genhtml_branch_coverage=1 00:39:17.013 --rc genhtml_function_coverage=1 00:39:17.013 --rc genhtml_legend=1 00:39:17.013 --rc geninfo_all_blocks=1 00:39:17.013 --rc geninfo_unexecuted_blocks=1 00:39:17.013 00:39:17.013 ' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:39:17.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:17.013 --rc genhtml_branch_coverage=1 00:39:17.013 --rc genhtml_function_coverage=1 00:39:17.013 --rc genhtml_legend=1 00:39:17.013 --rc geninfo_all_blocks=1 00:39:17.013 --rc geninfo_unexecuted_blocks=1 00:39:17.013 00:39:17.013 ' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:39:17.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:17.013 --rc genhtml_branch_coverage=1 00:39:17.013 --rc genhtml_function_coverage=1 00:39:17.013 --rc genhtml_legend=1 00:39:17.013 --rc geninfo_all_blocks=1 00:39:17.013 --rc geninfo_unexecuted_blocks=1 00:39:17.013 00:39:17.013 ' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:39:17.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:17.013 --rc genhtml_branch_coverage=1 00:39:17.013 --rc genhtml_function_coverage=1 00:39:17.013 --rc genhtml_legend=1 00:39:17.013 --rc geninfo_all_blocks=1 00:39:17.013 --rc geninfo_unexecuted_blocks=1 00:39:17.013 00:39:17.013 ' 00:39:17.013 15:35:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:17.013 15:35:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:17.013 15:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:20.299 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:20.299 15:35:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.299 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:20.300 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:20.300 Found net devices under 0000:84:00.0: cvl_0_0 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:20.300 Found net devices under 0000:84:00.1: cvl_0_1 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:39:20.300 00:39:20.300 --- 10.0.0.2 ping statistics --- 00:39:20.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.300 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:20.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:39:20.300 00:39:20.300 --- 10.0.0.1 ping statistics --- 00:39:20.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.300 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.300 15:35:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3373540 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3373540 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3373540 ']' 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:20.300 15:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.300 [2024-10-28 15:35:07.079398] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:20.300 [2024-10-28 15:35:07.081368] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:39:20.300 [2024-10-28 15:35:07.081496] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.559 [2024-10-28 15:35:07.208536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:20.559 [2024-10-28 15:35:07.278283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:20.559 [2024-10-28 15:35:07.278356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:20.559 [2024-10-28 15:35:07.278375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.559 [2024-10-28 15:35:07.278391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.559 [2024-10-28 15:35:07.278403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.559 [2024-10-28 15:35:07.280372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:20.559 [2024-10-28 15:35:07.280450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:20.559 [2024-10-28 15:35:07.280537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:20.559 [2024-10-28 15:35:07.280541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:20.559 [2024-10-28 15:35:07.384743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
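For reference, the nvmf_tcp_init sequence traced above builds a small two-port loopback topology on the E810 NIC: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1. A condensed, illustrative sketch of that setup follows; paths are shortened, the iptables comment is abbreviated, and the interface names and addresses are simply the ones this run discovered and assigned.

  # Sketch only, condensed from the nvmf/common.sh trace above.
  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first ice port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP (port 4420) in; the comment tag lets cleanup strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.2                                  # target reachable from the initiator side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction
  # the target itself then runs inside the namespace, in interrupt mode on cores 3-6 (-m 0x78)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78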
00:39:20.559 [2024-10-28 15:35:07.384982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:20.559 [2024-10-28 15:35:07.385285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:20.559 [2024-10-28 15:35:07.385969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:20.559 [2024-10-28 15:35:07.386235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:20.559 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:20.559 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:39:20.559 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:20.559 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:20.559 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.819 [2024-10-28 15:35:07.465402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.819 Malloc0 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.819 15:35:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:20.819 [2024-10-28 15:35:07.537556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:20.819 { 00:39:20.819 "params": { 00:39:20.819 "name": "Nvme$subsystem", 00:39:20.819 "trtype": "$TEST_TRANSPORT", 00:39:20.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:20.819 "adrfam": "ipv4", 00:39:20.819 "trsvcid": "$NVMF_PORT", 00:39:20.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:20.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:20.819 "hdgst": ${hdgst:-false}, 00:39:20.819 "ddgst": ${ddgst:-false} 00:39:20.819 }, 00:39:20.819 "method": "bdev_nvme_attach_controller" 00:39:20.819 } 00:39:20.819 EOF 00:39:20.819 )") 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:20.819 15:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:20.819 "params": { 00:39:20.819 "name": "Nvme1", 00:39:20.819 "trtype": "tcp", 00:39:20.819 "traddr": "10.0.0.2", 00:39:20.819 "adrfam": "ipv4", 00:39:20.819 "trsvcid": "4420", 00:39:20.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:20.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:20.819 "hdgst": false, 00:39:20.819 "ddgst": false 00:39:20.819 }, 00:39:20.819 "method": "bdev_nvme_attach_controller" 00:39:20.819 }' 00:39:20.819 [2024-10-28 15:35:07.596854] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
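The bdevio target configuration above is driven entirely over the RPC socket: a TCP transport is created, a 64 MiB malloc bdev (512-byte blocks) is exposed through subsystem nqn.2016-06.io.spdk:cnode1, and a listener is opened on 10.0.0.2:4420, after which the bdevio initiator attaches using the JSON produced by gen_nvmf_target_json. As a rough sketch, the same sequence could be issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket; the harness goes through its rpc_cmd helper, but the method names and arguments below are exactly the ones traced above.

  # Sketch: the RPC sequence from target/bdevio.sh with this run's parameters.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then connects as an initiator; fd 62 carries the generated bdev_nvme_attach_controller JSON
  ./test/bdev/bdevio/bdevio --json /dev/fd/62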
00:39:20.819 [2024-10-28 15:35:07.596999] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373687 ] 00:39:21.077 [2024-10-28 15:35:07.704538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:21.077 [2024-10-28 15:35:07.774813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.077 [2024-10-28 15:35:07.774868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.077 [2024-10-28 15:35:07.774872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.336 I/O targets: 00:39:21.336 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:21.336 00:39:21.336 00:39:21.336 CUnit - A unit testing framework for C - Version 2.1-3 00:39:21.336 http://cunit.sourceforge.net/ 00:39:21.336 00:39:21.336 00:39:21.336 Suite: bdevio tests on: Nvme1n1 00:39:21.336 Test: blockdev write read block ...passed 00:39:21.336 Test: blockdev write zeroes read block ...passed 00:39:21.336 Test: blockdev write zeroes read no split ...passed 00:39:21.336 Test: blockdev write zeroes read split ...passed 00:39:21.336 Test: blockdev write zeroes read split partial ...passed 00:39:21.336 Test: blockdev reset ...[2024-10-28 15:35:08.114748] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:21.336 [2024-10-28 15:35:08.114873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c3e80 (9): Bad file descriptor 00:39:21.336 [2024-10-28 15:35:08.120096] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:39:21.336 passed 00:39:21.336 Test: blockdev write read 8 blocks ...passed 00:39:21.336 Test: blockdev write read size > 128k ...passed 00:39:21.336 Test: blockdev write read invalid size ...passed 00:39:21.336 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:21.336 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:21.336 Test: blockdev write read max offset ...passed 00:39:21.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:21.595 Test: blockdev writev readv 8 blocks ...passed 00:39:21.595 Test: blockdev writev readv 30 x 1block ...passed 00:39:21.595 Test: blockdev writev readv block ...passed 00:39:21.595 Test: blockdev writev readv size > 128k ...passed 00:39:21.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:21.595 Test: blockdev comparev and writev ...[2024-10-28 15:35:08.297194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.297233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.297259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.297276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.297804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.297831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.297854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.297871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.298384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.298410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.298432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.298456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.298987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.299014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.299036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:21.595 [2024-10-28 15:35:08.299052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:21.595 passed 00:39:21.595 Test: blockdev nvme passthru rw ...passed 00:39:21.595 Test: blockdev nvme passthru vendor specific ...[2024-10-28 15:35:08.380992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:21.595 [2024-10-28 15:35:08.381020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.381174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:21.595 [2024-10-28 15:35:08.381197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.381351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:21.595 [2024-10-28 15:35:08.381376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:21.595 [2024-10-28 15:35:08.381526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:21.595 [2024-10-28 15:35:08.381550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:21.595 passed 00:39:21.595 Test: blockdev nvme admin passthru ...passed 00:39:21.595 Test: blockdev copy ...passed 00:39:21.595 00:39:21.595 Run Summary: Type Total Ran Passed Failed Inactive 00:39:21.595 suites 1 1 n/a 0 0 00:39:21.595 tests 23 23 23 0 0 00:39:21.595 asserts 152 152 152 0 n/a 00:39:21.595 00:39:21.595 Elapsed time = 0.945 seconds 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:21.853 rmmod nvme_tcp 00:39:21.853 rmmod nvme_fabrics 00:39:21.853 rmmod nvme_keyring 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
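Teardown mirrors the setup: the subsystem is deleted over RPC, the host-side NVMe/TCP modules are unloaded (the rmmod lines above), and, as traced just below, the target process is killed, the SPDK_NVMF-tagged iptables rules are stripped, and the namespace and addresses are removed. A condensed sketch of that cleanup, with the same naming as this run; the namespace deletion is the assumed effect of _remove_spdk_ns, whose body is not expanded in the trace.

  # Sketch of nvmftestfini / nvmfcleanup as traced here and just below.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp                    # also drops nvme_fabrics and nvme_keyring, per the rmmod output
  modprobe -v -r nvme-fabrics
  kill 3373540                               # killprocess: stop the nvmf_tgt started for this test
  # remove only the rules this test added (they carry an SPDK_NVMF comment)
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk            # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                   # clear the initiator-side address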
00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3373540 ']' 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3373540 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3373540 ']' 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3373540 00:39:21.853 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:39:22.111 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:22.112 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3373540 00:39:22.112 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:39:22.112 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:39:22.112 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3373540' 00:39:22.112 killing process with pid 3373540 00:39:22.112 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3373540 00:39:22.112 15:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3373540 00:39:22.371 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:22.371 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:22.371 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.372 15:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.280 15:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:24.280 00:39:24.280 real 0m7.688s 00:39:24.280 user 
0m8.368s 00:39:24.280 sys 0m3.567s 00:39:24.280 15:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:24.280 15:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:24.280 ************************************ 00:39:24.280 END TEST nvmf_bdevio 00:39:24.280 ************************************ 00:39:24.280 15:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:24.280 00:39:24.280 real 4m52.312s 00:39:24.280 user 9m58.605s 00:39:24.280 sys 1m50.774s 00:39:24.280 15:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:24.280 15:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:24.280 ************************************ 00:39:24.280 END TEST nvmf_target_core_interrupt_mode 00:39:24.280 ************************************ 00:39:24.280 15:35:11 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:24.280 15:35:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:24.280 15:35:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:24.280 15:35:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:24.540 ************************************ 00:39:24.540 START TEST nvmf_interrupt 00:39:24.540 ************************************ 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:24.540 * Looking for test storage... 
00:39:24.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # lcov --version 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:24.540 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:39:24.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.541 --rc genhtml_branch_coverage=1 00:39:24.541 --rc genhtml_function_coverage=1 00:39:24.541 --rc genhtml_legend=1 00:39:24.541 --rc geninfo_all_blocks=1 00:39:24.541 --rc geninfo_unexecuted_blocks=1 00:39:24.541 00:39:24.541 ' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:39:24.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.541 --rc genhtml_branch_coverage=1 00:39:24.541 --rc genhtml_function_coverage=1 00:39:24.541 --rc genhtml_legend=1 00:39:24.541 --rc geninfo_all_blocks=1 00:39:24.541 --rc geninfo_unexecuted_blocks=1 00:39:24.541 00:39:24.541 ' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:39:24.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.541 --rc genhtml_branch_coverage=1 00:39:24.541 --rc genhtml_function_coverage=1 00:39:24.541 --rc genhtml_legend=1 00:39:24.541 --rc geninfo_all_blocks=1 00:39:24.541 --rc geninfo_unexecuted_blocks=1 00:39:24.541 00:39:24.541 ' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:39:24.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.541 --rc genhtml_branch_coverage=1 00:39:24.541 --rc genhtml_function_coverage=1 00:39:24.541 --rc genhtml_legend=1 00:39:24.541 --rc geninfo_all_blocks=1 00:39:24.541 --rc geninfo_unexecuted_blocks=1 00:39:24.541 00:39:24.541 ' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:24.541 15:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:27.835 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:27.835 15:35:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:27.835 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:27.835 Found net devices under 0000:84:00.0: cvl_0_0 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:27.835 Found net devices under 0000:84:00.1: cvl_0_1 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:27.835 15:35:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:27.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:27.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:39:27.835 00:39:27.835 --- 10.0.0.2 ping statistics --- 00:39:27.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:27.835 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:27.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:27.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:39:27.835 00:39:27.835 --- 10.0.0.1 ping statistics --- 00:39:27.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:27.835 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3375824 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3375824 00:39:27.835 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3375824 ']' 00:39:27.836 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:27.836 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:27.836 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:27.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:27.836 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:27.836 15:35:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:27.836 [2024-10-28 15:35:14.640842] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:27.836 [2024-10-28 15:35:14.642169] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:39:27.836 [2024-10-28 15:35:14.642294] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:28.095 [2024-10-28 15:35:14.789478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:28.095 [2024-10-28 15:35:14.906125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:28.095 [2024-10-28 15:35:14.906255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:28.095 [2024-10-28 15:35:14.906294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:28.095 [2024-10-28 15:35:14.906325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:28.095 [2024-10-28 15:35:14.906351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:28.095 [2024-10-28 15:35:14.909340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.095 [2024-10-28 15:35:14.909357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.355 [2024-10-28 15:35:15.075210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:28.355 [2024-10-28 15:35:15.075353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:28.355 [2024-10-28 15:35:15.075675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:28.355 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:28.616 5000+0 records in 00:39:28.616 5000+0 records out 00:39:28.616 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0227791 s, 450 MB/s 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:28.616 AIO0 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:28.616 [2024-10-28 15:35:15.282825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.616 15:35:15 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:28.616 [2024-10-28 15:35:15.319114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3375824 0 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3375824 0 idle 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:28.616 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375824 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.53 reactor_0' 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375824 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.53 reactor_0 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3375824 1 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3375824 1 idle 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375930 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375930 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:28.876 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3375975 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
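A note for readers tracing the reactor_is_busy_or_idle checks in the output above: each check takes one batch snapshot of the target's threads with top, pulls out the reactor_N line, and compares the %CPU column against the idle or busy threshold. The following is a condensed, stand-alone sketch of that probe; the PID and threshold are the ones printed in this run and are hard-coded purely for illustration, and this is an approximation of interrupt/common.sh, not its exact implementation.

    # Stand-alone sketch of the reactor busy/idle probe traced above (PID/threshold from this run).
    pid=3375824 idx=0 idle_threshold=30
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g')
    cpu_rate=$(echo "$top_reactor" | awk '{print $9}')   # %CPU column of the reactor thread
    cpu_rate=${cpu_rate%.*}                              # drop the fractional part before the integer compare
    if (( cpu_rate > idle_threshold )); then
        echo "reactor_$idx is busy (${cpu_rate}%)"
    else
        echo "reactor_$idx is idle (${cpu_rate}%)"
    fi

In interrupt mode the reactors should report ~0% while no I/O is in flight, which is exactly what the 0.0 readings above show; the same probe is repeated below with the busy threshold while spdk_nvme_perf drives I/O.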
00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3375824 0 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3375824 0 busy 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:28.877 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:29.135 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375824 root 20 0 128.2g 47616 34560 S 6.7 0.1 0:00.55 reactor_0' 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375824 root 20 0 128.2g 47616 34560 S 6.7 0.1 0:00.55 reactor_0 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:29.136 15:35:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:30.069 15:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:30.069 15:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:30.069 15:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:30.069 15:35:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:30.326 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375824 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.71 reactor_0' 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375824 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.71 reactor_0 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3375824 1 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3375824 1 busy 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:30.327 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:30.584 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375930 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:01.26 reactor_1' 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375930 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:01.26 reactor_1 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:30.585 15:35:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3375975 00:39:40.549 Initializing NVMe Controllers 00:39:40.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:40.549 Controller IO queue size 256, less than required. 00:39:40.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:40.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:40.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:40.549 Initialization complete. Launching workers. 
00:39:40.549 ======================================================== 00:39:40.549 Latency(us) 00:39:40.549 Device Information : IOPS MiB/s Average min max 00:39:40.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13812.49 53.96 18544.34 5313.46 24048.03 00:39:40.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14324.39 55.95 17881.67 3489.99 24041.54 00:39:40.549 ======================================================== 00:39:40.549 Total : 28136.89 109.91 18206.98 3489.99 24048.03 00:39:40.549 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3375824 0 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3375824 0 idle 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:40.549 15:35:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375824 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.50 reactor_0' 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375824 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.50 reactor_0 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3375824 1 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3375824 1 idle 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:40.549 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375930 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.98 reactor_1' 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375930 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.98 reactor_1 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:40.550 15:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3375824 0 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3375824 0 idle 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:41.929 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375824 root 20 0 128.2g 60288 34560 S 6.2 0.1 0:20.67 reactor_0' 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375824 root 20 0 128.2g 60288 34560 S 6.2 0.1 0:20.67 reactor_0 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3375824 1 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3375824 1 idle 00:39:42.189 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3375824 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
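The connect step traced a few entries above (target/interrupt.sh@50-51) follows the usual initiator-side pattern: attach with nvme-cli over TCP, then poll lsblk until a block device carrying the subsystem's serial number appears. A rough sketch of that sequence, using the NQN, address, and serial printed in this log; the retry count mirrors the harness's waitforserial loop and the exact options of the helper are not reproduced here.

    # Rough sketch of the connect-and-wait step traced above (values taken from this log).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
         --hostid=cd6acfbe-4794-e311-a299-001e67a97b02
    for i in {1..15}; do
        # wait until a namespace with the expected serial is visible as a block device
        if lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; then
            break
        fi
        sleep 2
    done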
00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3375824 -w 256 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3375930 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.04 reactor_1' 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3375930 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.04 reactor_1 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:42.190 15:35:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:42.190 15:35:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:42.190 15:35:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:42.190 15:35:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:42.190 15:35:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:42.190 15:35:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:42.190 15:35:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:42.190 15:35:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:42.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:42.448 rmmod nvme_tcp 00:39:42.448 rmmod nvme_fabrics 00:39:42.448 rmmod nvme_keyring 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3375824 ']' 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3375824 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3375824 ']' 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3375824 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3375824 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3375824' 00:39:42.448 killing process with pid 3375824 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3375824 00:39:42.448 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3375824 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:43.025 15:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.936 15:35:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:44.936 00:39:44.936 real 0m20.489s 00:39:44.936 user 0m37.571s 00:39:44.936 sys 0m8.513s 00:39:44.936 15:35:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:44.936 15:35:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:44.936 ************************************ 00:39:44.936 END TEST nvmf_interrupt 00:39:44.936 ************************************ 00:39:44.936 00:39:44.936 real 32m3.492s 00:39:44.936 user 72m52.248s 00:39:44.936 sys 8m31.600s 00:39:44.936 15:35:31 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:44.936 15:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:44.936 ************************************ 00:39:44.936 END TEST nvmf_tcp 00:39:44.936 ************************************ 00:39:44.936 15:35:31 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:39:44.936 15:35:31 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:44.936 15:35:31 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:44.937 15:35:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:44.937 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:39:44.937 ************************************ 00:39:44.937 START TEST spdkcli_nvmf_tcp 00:39:44.937 ************************************ 00:39:44.937 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:45.198 * Looking for test storage... 00:39:45.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:45.198 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:39:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.199 --rc genhtml_branch_coverage=1 00:39:45.199 --rc genhtml_function_coverage=1 00:39:45.199 --rc genhtml_legend=1 00:39:45.199 --rc geninfo_all_blocks=1 00:39:45.199 --rc geninfo_unexecuted_blocks=1 00:39:45.199 00:39:45.199 ' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:39:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.199 --rc genhtml_branch_coverage=1 00:39:45.199 --rc genhtml_function_coverage=1 00:39:45.199 --rc genhtml_legend=1 00:39:45.199 --rc geninfo_all_blocks=1 00:39:45.199 --rc geninfo_unexecuted_blocks=1 00:39:45.199 00:39:45.199 ' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:39:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.199 --rc genhtml_branch_coverage=1 00:39:45.199 --rc genhtml_function_coverage=1 00:39:45.199 --rc genhtml_legend=1 00:39:45.199 --rc geninfo_all_blocks=1 00:39:45.199 --rc geninfo_unexecuted_blocks=1 00:39:45.199 00:39:45.199 ' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:39:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.199 --rc genhtml_branch_coverage=1 00:39:45.199 --rc genhtml_function_coverage=1 00:39:45.199 --rc genhtml_legend=1 00:39:45.199 --rc geninfo_all_blocks=1 00:39:45.199 --rc geninfo_unexecuted_blocks=1 00:39:45.199 00:39:45.199 ' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:45.199 
15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:45.199 15:35:31 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:45.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3377975 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3377975 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3377975 ']' 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:45.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:45.199 15:35:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:45.200 [2024-10-28 15:35:32.001348] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
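Before the spdkcli commands run, the harness launches a fresh nvmf_tgt (pid 3377975 in this run) and blocks until its RPC socket is ready, which is what the waitforlisten call above is doing. A minimal launch-and-wait sketch, assuming the default /var/tmp/spdk.sock RPC socket shown in the log; this approximates the helper's behaviour rather than copying its implementation.

    # Minimal launch-and-wait sketch (approximates waitforlisten; paths as printed in this log).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        # bail out if the target died before creating its RPC socket
        kill -0 "$nvmf_tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmf_tgt_pid) is listening on /var/tmp/spdk.sock"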
00:39:45.200 [2024-10-28 15:35:32.001452] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377975 ] 00:39:45.460 [2024-10-28 15:35:32.180509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:45.721 [2024-10-28 15:35:32.349411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:45.721 [2024-10-28 15:35:32.349421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:45.981 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:45.982 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:45.982 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:45.982 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:45.982 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:45.982 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:45.982 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:45.982 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:45.982 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:45.982 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:45.982 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:45.982 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:45.982 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:45.982 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:45.982 ' 00:39:49.280 [2024-10-28 15:35:35.938962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.662 [2024-10-28 15:35:37.409556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:53.956 [2024-10-28 15:35:40.124722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:55.865 [2024-10-28 15:35:42.425709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:57.834 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:57.834 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:57.834 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:57.834 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:57.834 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:57.834 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:57.834 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:57.834 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:57.834 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:57.834 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:57.834 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:57.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:57.834 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:57.834 15:35:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:58.095 
15:35:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:58.095 15:35:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:58.095 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:58.095 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:58.095 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:58.095 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:58.095 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:58.095 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:58.095 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:58.095 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:58.095 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:58.096 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:58.096 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:58.096 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:58.096 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:58.096 ' 00:40:04.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:04.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:04.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:04.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:04.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:04.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:04.672 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:04.672 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:04.672 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:04.672 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:04.672 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:04.672 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:04.672 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:04.672 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:04.672 
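The teardown that just completed walks the tree bottom-up: namespaces, hosts and listen addresses are removed first, then the subsystems, and the malloc bdevs only at the end, once nothing references them. A minimal sketch of that ordering with one-shot spdkcli.py calls (same assumptions as the creation sketch above; the paths follow the job arguments, condensed to one subsystem and one bdev):

#!/usr/bin/env bash
# Sketch only -- mirrors the delete order of the spdkcli_job.py teardown above.
SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py

$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1"                   # drop the namespace by nsid
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2"   # then any allowed hosts
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all"                # then every listener on the subsystem
$SPDKCLI "/nvmf/subsystem delete_all"                                                             # then the subsystems themselves
$SPDKCLI "/bdevs/malloc delete Malloc3"                                                           # bdevs last, once unreferenced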
15:35:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3377975 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3377975 ']' 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3377975 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:04.672 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3377975 00:40:04.673 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:04.673 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:04.673 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3377975' 00:40:04.673 killing process with pid 3377975 00:40:04.673 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3377975 00:40:04.673 15:35:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3377975 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3377975 ']' 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3377975 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3377975 ']' 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3377975 00:40:04.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3377975) - No such process 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3377975 is not found' 00:40:04.673 Process with pid 3377975 is not found 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:04.673 00:40:04.673 real 0m19.469s 00:40:04.673 user 0m43.329s 00:40:04.673 sys 0m1.351s 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:04.673 15:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:04.673 ************************************ 00:40:04.673 END TEST spdkcli_nvmf_tcp 00:40:04.673 ************************************ 00:40:04.673 15:35:51 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:04.673 15:35:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:04.673 15:35:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:04.673 15:35:51 -- common/autotest_common.sh@10 -- # set +x 00:40:04.673 ************************************ 00:40:04.673 START TEST nvmf_identify_passthru 00:40:04.673 ************************************ 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:04.673 * Looking for test 
storage... 00:40:04.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # lcov --version 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:40:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.673 --rc genhtml_branch_coverage=1 00:40:04.673 --rc genhtml_function_coverage=1 00:40:04.673 --rc genhtml_legend=1 00:40:04.673 --rc geninfo_all_blocks=1 00:40:04.673 --rc geninfo_unexecuted_blocks=1 00:40:04.673 00:40:04.673 ' 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:40:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.673 --rc genhtml_branch_coverage=1 00:40:04.673 --rc genhtml_function_coverage=1 00:40:04.673 --rc genhtml_legend=1 00:40:04.673 --rc geninfo_all_blocks=1 00:40:04.673 --rc geninfo_unexecuted_blocks=1 00:40:04.673 00:40:04.673 ' 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:40:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.673 --rc genhtml_branch_coverage=1 00:40:04.673 --rc genhtml_function_coverage=1 00:40:04.673 --rc genhtml_legend=1 00:40:04.673 --rc geninfo_all_blocks=1 00:40:04.673 --rc geninfo_unexecuted_blocks=1 00:40:04.673 00:40:04.673 ' 00:40:04.673 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:40:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.673 --rc genhtml_branch_coverage=1 00:40:04.673 --rc genhtml_function_coverage=1 00:40:04.673 --rc genhtml_legend=1 00:40:04.673 --rc geninfo_all_blocks=1 00:40:04.673 --rc geninfo_unexecuted_blocks=1 00:40:04.673 00:40:04.673 ' 00:40:04.673 15:35:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.673 15:35:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.673 15:35:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.673 15:35:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.673 15:35:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.673 15:35:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:04.673 15:35:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.673 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:04.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:04.674 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:04.932 15:35:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.932 15:35:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:04.932 15:35:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.932 15:35:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.932 15:35:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.932 15:35:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.932 15:35:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.932 15:35:51 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.932 15:35:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:04.932 15:35:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.932 15:35:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.932 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:04.932 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:04.932 15:35:51 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:04.932 15:35:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:07.464 15:35:54 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:07.464 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:07.464 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.464 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:07.464 Found net devices under 0000:84:00.0: cvl_0_0 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:07.465 Found net devices under 0000:84:00.1: cvl_0_1 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:07.465 15:35:54 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:07.465 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:07.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:07.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:40:07.724 00:40:07.724 --- 10.0.0.2 ping statistics --- 00:40:07.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.724 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:07.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:07.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:40:07.724 00:40:07.724 --- 10.0.0.1 ping statistics --- 00:40:07.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.724 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:07.724 15:35:54 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:07.724 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:07.724 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1505 -- # bdfs=() 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1505 -- # local bdfs 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1494 -- # bdfs=() 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1494 -- # local bdfs 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:82:00.0 00:40:07.724 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # echo 0000:82:00.0 00:40:07.724 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:40:07.724 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:40:07.724 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:40:07.724 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:07.724 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:13.003 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ9142051K1P0FGN 00:40:13.003 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:40:13.003 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:13.003 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:17.201 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:17.201 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.201 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.201 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3383111 00:40:17.201 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:17.201 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:17.201 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3383111 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3383111 ']' 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:17.201 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.201 [2024-10-28 15:36:03.461382] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:40:17.202 [2024-10-28 15:36:03.461569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:17.202 [2024-10-28 15:36:03.644829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:17.202 [2024-10-28 15:36:03.779162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:17.202 [2024-10-28 15:36:03.779271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:17.202 [2024-10-28 15:36:03.779307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:17.202 [2024-10-28 15:36:03.779340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:17.202 [2024-10-28 15:36:03.779366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:17.202 [2024-10-28 15:36:03.783281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:17.202 [2024-10-28 15:36:03.783384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:17.202 [2024-10-28 15:36:03.783481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:17.202 [2024-10-28 15:36:03.783484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.202 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:17.202 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:17.202 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:17.202 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:17.202 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.202 INFO: Log level set to 20 00:40:17.202 INFO: Requests: 00:40:17.202 { 00:40:17.202 "jsonrpc": "2.0", 00:40:17.202 "method": "nvmf_set_config", 00:40:17.202 "id": 1, 00:40:17.202 "params": { 00:40:17.202 "admin_cmd_passthru": { 00:40:17.202 "identify_ctrlr": true 00:40:17.202 } 00:40:17.202 } 00:40:17.202 } 00:40:17.202 00:40:17.202 INFO: response: 00:40:17.202 { 00:40:17.202 "jsonrpc": "2.0", 00:40:17.202 "id": 1, 00:40:17.202 "result": true 00:40:17.202 } 00:40:17.202 00:40:17.202 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:17.202 15:36:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:17.202 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:17.202 15:36:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.202 INFO: Setting log level to 20 00:40:17.202 INFO: Setting log level to 20 00:40:17.202 INFO: Log level set to 20 00:40:17.202 INFO: Log level set to 20 00:40:17.202 INFO: Requests: 00:40:17.202 { 00:40:17.202 "jsonrpc": "2.0", 00:40:17.202 "method": "framework_start_init", 00:40:17.202 "id": 1 00:40:17.202 } 00:40:17.202 00:40:17.202 INFO: Requests: 00:40:17.202 { 00:40:17.202 "jsonrpc": "2.0", 00:40:17.202 "method": "framework_start_init", 00:40:17.202 "id": 1 00:40:17.202 } 00:40:17.202 00:40:17.460 [2024-10-28 15:36:04.106998] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:17.460 INFO: response: 00:40:17.460 { 00:40:17.460 "jsonrpc": "2.0", 00:40:17.460 "id": 1, 00:40:17.460 "result": true 00:40:17.460 } 00:40:17.460 00:40:17.460 INFO: response: 00:40:17.460 { 00:40:17.460 "jsonrpc": "2.0", 00:40:17.460 "id": 1, 00:40:17.460 "result": true 00:40:17.460 } 00:40:17.460 00:40:17.460 15:36:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:17.460 15:36:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:17.460 15:36:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:17.460 15:36:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:17.460 INFO: Setting log level to 40 00:40:17.460 INFO: Setting log level to 40 00:40:17.460 INFO: Setting log level to 40 00:40:17.460 [2024-10-28 15:36:04.117412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:17.460 15:36:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:17.460 15:36:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:17.460 15:36:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:17.460 15:36:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.460 15:36:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:40:17.460 15:36:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:17.460 15:36:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:20.744 Nvme0n1 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:20.744 [2024-10-28 15:36:07.036076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:20.744 [ 00:40:20.744 { 00:40:20.744 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:20.744 "subtype": "Discovery", 00:40:20.744 "listen_addresses": [], 00:40:20.744 "allow_any_host": true, 00:40:20.744 "hosts": [] 00:40:20.744 }, 00:40:20.744 { 00:40:20.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:20.744 "subtype": "NVMe", 00:40:20.744 "listen_addresses": [ 00:40:20.744 { 00:40:20.744 "trtype": "TCP", 00:40:20.744 "adrfam": "IPv4", 00:40:20.744 "traddr": "10.0.0.2", 00:40:20.744 "trsvcid": "4420" 00:40:20.744 } 00:40:20.744 ], 00:40:20.744 "allow_any_host": true, 00:40:20.744 "hosts": [], 00:40:20.744 "serial_number": 
"SPDK00000000000001", 00:40:20.744 "model_number": "SPDK bdev Controller", 00:40:20.744 "max_namespaces": 1, 00:40:20.744 "min_cntlid": 1, 00:40:20.744 "max_cntlid": 65519, 00:40:20.744 "namespaces": [ 00:40:20.744 { 00:40:20.744 "nsid": 1, 00:40:20.744 "bdev_name": "Nvme0n1", 00:40:20.744 "name": "Nvme0n1", 00:40:20.744 "nguid": "925F07E3B4F749B2A0F4BA2221F38B23", 00:40:20.744 "uuid": "925f07e3-b4f7-49b2-a0f4-ba2221f38b23" 00:40:20.744 } 00:40:20.744 ] 00:40:20.744 } 00:40:20.744 ] 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:20.744 15:36:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:20.744 rmmod nvme_tcp 00:40:20.744 rmmod nvme_fabrics 00:40:20.744 rmmod nvme_keyring 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3383111 ']' 00:40:20.744 15:36:07 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3383111 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3383111 ']' 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3383111 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:20.744 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3383111 00:40:21.001 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:21.001 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:21.001 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3383111' 00:40:21.001 killing process with pid 3383111 00:40:21.001 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3383111 00:40:21.001 15:36:07 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3383111 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:22.906 15:36:09 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:22.906 15:36:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:22.906 15:36:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:24.802 15:36:11 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:24.802 00:40:24.802 real 0m20.069s 00:40:24.802 user 0m28.343s 00:40:24.802 sys 0m4.278s 00:40:24.802 15:36:11 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:24.802 15:36:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.802 ************************************ 00:40:24.802 END TEST nvmf_identify_passthru 00:40:24.802 ************************************ 00:40:24.802 15:36:11 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:24.802 15:36:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:24.802 15:36:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:24.802 15:36:11 -- common/autotest_common.sh@10 -- # set +x 00:40:24.802 ************************************ 00:40:24.802 START TEST nvmf_dif 00:40:24.802 ************************************ 00:40:24.802 15:36:11 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:24.802 * Looking for test 
storage... 00:40:24.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:24.802 15:36:11 nvmf_dif -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:40:24.802 15:36:11 nvmf_dif -- common/autotest_common.sh@1689 -- # lcov --version 00:40:24.802 15:36:11 nvmf_dif -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:40:25.060 15:36:11 nvmf_dif -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:25.060 15:36:11 nvmf_dif -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:25.060 15:36:11 nvmf_dif -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:40:25.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:25.060 --rc genhtml_branch_coverage=1 00:40:25.060 --rc genhtml_function_coverage=1 00:40:25.060 --rc genhtml_legend=1 00:40:25.060 --rc geninfo_all_blocks=1 00:40:25.060 --rc geninfo_unexecuted_blocks=1 00:40:25.060 00:40:25.060 ' 00:40:25.060 15:36:11 nvmf_dif -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:40:25.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:25.060 --rc genhtml_branch_coverage=1 00:40:25.060 --rc genhtml_function_coverage=1 00:40:25.060 --rc genhtml_legend=1 00:40:25.060 --rc geninfo_all_blocks=1 00:40:25.060 --rc geninfo_unexecuted_blocks=1 00:40:25.060 00:40:25.060 ' 00:40:25.060 15:36:11 nvmf_dif -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:40:25.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:25.060 --rc genhtml_branch_coverage=1 00:40:25.060 --rc genhtml_function_coverage=1 00:40:25.060 --rc genhtml_legend=1 00:40:25.060 --rc geninfo_all_blocks=1 00:40:25.060 --rc geninfo_unexecuted_blocks=1 00:40:25.060 00:40:25.060 ' 00:40:25.060 15:36:11 nvmf_dif -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:40:25.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:25.060 --rc genhtml_branch_coverage=1 00:40:25.060 --rc genhtml_function_coverage=1 00:40:25.060 --rc genhtml_legend=1 00:40:25.060 --rc geninfo_all_blocks=1 00:40:25.060 --rc geninfo_unexecuted_blocks=1 00:40:25.060 00:40:25.060 ' 00:40:25.060 15:36:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:25.060 15:36:11 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:25.060 15:36:11 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:25.060 15:36:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.061 15:36:11 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.061 15:36:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.061 15:36:11 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:25.061 15:36:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:25.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:25.061 15:36:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:25.061 15:36:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:25.061 15:36:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:25.061 15:36:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:25.061 15:36:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:25.061 15:36:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:25.061 15:36:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:25.061 15:36:11 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:25.061 15:36:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:28.349 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.349 
15:36:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:28.349 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:28.349 Found net devices under 0000:84:00.0: cvl_0_0 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:28.349 Found net devices under 0000:84:00.1: cvl_0_1 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:28.349 15:36:14 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:28.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:40:28.350 00:40:28.350 --- 10.0.0.2 ping statistics --- 00:40:28.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.350 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:28.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:40:28.350 00:40:28.350 --- 10.0.0.1 ping statistics --- 00:40:28.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.350 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:28.350 15:36:14 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:29.754 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:29.754 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:29.754 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:29.754 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:29.754 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:29.754 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:29.754 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:29.754 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:29.754 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:29.754 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:29.754 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:29.754 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:29.754 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:29.754 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:29.754 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:29.754 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:29.754 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:30.013 15:36:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:30.013 15:36:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3387047 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:30.013 15:36:16 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3387047 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3387047 ']' 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:30.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:30.013 15:36:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:30.013 [2024-10-28 15:36:16.851822] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:40:30.013 [2024-10-28 15:36:16.852000] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:30.272 [2024-10-28 15:36:16.987369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.272 [2024-10-28 15:36:17.062854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:30.272 [2024-10-28 15:36:17.062934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:30.272 [2024-10-28 15:36:17.062955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:30.272 [2024-10-28 15:36:17.062971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:30.272 [2024-10-28 15:36:17.062986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:30.272 [2024-10-28 15:36:17.063868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.839 15:36:17 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:30.839 15:36:17 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:40:30.839 15:36:17 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:30.840 15:36:17 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:30.840 15:36:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:30.840 15:36:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:30.840 [2024-10-28 15:36:17.443481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.840 15:36:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:30.840 15:36:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:30.840 ************************************ 00:40:30.840 START TEST fio_dif_1_default 00:40:30.840 ************************************ 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:30.840 bdev_null0 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:30.840 [2024-10-28 15:36:17.511869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:30.840 { 00:40:30.840 "params": { 00:40:30.840 "name": "Nvme$subsystem", 00:40:30.840 "trtype": "$TEST_TRANSPORT", 00:40:30.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:30.840 "adrfam": "ipv4", 00:40:30.840 "trsvcid": "$NVMF_PORT", 
00:40:30.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:30.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:30.840 "hdgst": ${hdgst:-false}, 00:40:30.840 "ddgst": ${ddgst:-false} 00:40:30.840 }, 00:40:30.840 "method": "bdev_nvme_attach_controller" 00:40:30.840 } 00:40:30.840 EOF 00:40:30.840 )") 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:30.840 "params": { 00:40:30.840 "name": "Nvme0", 00:40:30.840 "trtype": "tcp", 00:40:30.840 "traddr": "10.0.0.2", 00:40:30.840 "adrfam": "ipv4", 00:40:30.840 "trsvcid": "4420", 00:40:30.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:30.840 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:30.840 "hdgst": false, 00:40:30.840 "ddgst": false 00:40:30.840 }, 00:40:30.840 "method": "bdev_nvme_attach_controller" 00:40:30.840 }' 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:30.840 15:36:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.098 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:31.098 fio-3.35 00:40:31.098 Starting 1 thread 00:40:43.295 00:40:43.295 filename0: (groupid=0, jobs=1): err= 0: pid=3387278: Mon Oct 28 15:36:28 2024 00:40:43.295 read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10008msec) 00:40:43.295 slat (nsec): min=4488, max=46109, avg=13668.28, stdev=5517.38 00:40:43.295 clat (usec): min=758, max=42977, avg=40309.00, stdev=5663.87 00:40:43.295 lat (usec): min=769, max=42993, avg=40322.66, stdev=5663.92 00:40:43.295 clat percentiles (usec): 00:40:43.295 | 1.00th=[ 881], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:43.295 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:43.295 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:40:43.295 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:40:43.295 | 99.99th=[42730] 00:40:43.295 bw ( KiB/s): min= 383, max= 448, per=99.63%, avg=395.15, stdev=18.82, samples=20 00:40:43.295 iops : min= 95, max= 112, avg=98.75, stdev= 4.73, samples=20 00:40:43.295 lat (usec) : 1000=1.61% 00:40:43.295 lat (msec) : 2=0.40%, 50=97.98% 00:40:43.295 cpu : usr=90.97%, sys=8.64%, ctx=9, majf=0, minf=9 00:40:43.295 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:43.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.295 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.295 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:43.295 
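(As a quick cross-check of the summary that follows: the job issued 992 reads of 4096 bytes over a 10008 ms runtime, which works out to the 396 KiB/s and 406 kB/s figures fio reports.)

    # 992 reads x 4096 B over 10.008 s
    awk 'BEGIN { printf "%.0f KiB/s\n", (992 * 4096 / 1024) / 10.008 }'   # prints 396 KiB/s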
00:40:43.295 Run status group 0 (all jobs): 00:40:43.295 READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10008-10008msec 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 00:40:43.295 real 0m11.662s 00:40:43.295 user 0m10.624s 00:40:43.295 sys 0m1.329s 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 ************************************ 00:40:43.295 END TEST fio_dif_1_default 00:40:43.295 ************************************ 00:40:43.295 15:36:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:43.295 15:36:29 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:43.295 15:36:29 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 ************************************ 00:40:43.295 START TEST fio_dif_1_multi_subsystems 00:40:43.295 ************************************ 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 bdev_null0 00:40:43.295 15:36:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 [2024-10-28 15:36:29.243267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 bdev_null1 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.295 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:43.296 { 00:40:43.296 "params": { 00:40:43.296 "name": "Nvme$subsystem", 00:40:43.296 "trtype": "$TEST_TRANSPORT", 00:40:43.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:43.296 "adrfam": "ipv4", 00:40:43.296 "trsvcid": "$NVMF_PORT", 00:40:43.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:43.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:43.296 "hdgst": ${hdgst:-false}, 00:40:43.296 "ddgst": ${ddgst:-false} 00:40:43.296 }, 00:40:43.296 "method": "bdev_nvme_attach_controller" 00:40:43.296 } 00:40:43.296 EOF 00:40:43.296 )") 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:43.296 
15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:43.296 { 00:40:43.296 "params": { 00:40:43.296 "name": "Nvme$subsystem", 00:40:43.296 "trtype": "$TEST_TRANSPORT", 00:40:43.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:43.296 "adrfam": "ipv4", 00:40:43.296 "trsvcid": "$NVMF_PORT", 00:40:43.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:43.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:43.296 "hdgst": ${hdgst:-false}, 00:40:43.296 "ddgst": ${ddgst:-false} 00:40:43.296 }, 00:40:43.296 "method": "bdev_nvme_attach_controller" 00:40:43.296 } 00:40:43.296 EOF 00:40:43.296 )") 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:43.296 "params": { 00:40:43.296 "name": "Nvme0", 00:40:43.296 "trtype": "tcp", 00:40:43.296 "traddr": "10.0.0.2", 00:40:43.296 "adrfam": "ipv4", 00:40:43.296 "trsvcid": "4420", 00:40:43.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:43.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:43.296 "hdgst": false, 00:40:43.296 "ddgst": false 00:40:43.296 }, 00:40:43.296 "method": "bdev_nvme_attach_controller" 00:40:43.296 },{ 00:40:43.296 "params": { 00:40:43.296 "name": "Nvme1", 00:40:43.296 "trtype": "tcp", 00:40:43.296 "traddr": "10.0.0.2", 00:40:43.296 "adrfam": "ipv4", 00:40:43.296 "trsvcid": "4420", 00:40:43.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:43.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:43.296 "hdgst": false, 00:40:43.296 "ddgst": false 00:40:43.296 }, 00:40:43.296 "method": "bdev_nvme_attach_controller" 00:40:43.296 }' 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:43.296 15:36:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:43.296 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:43.296 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:43.296 fio-3.35 00:40:43.296 Starting 2 threads 00:40:55.518 00:40:55.518 filename0: (groupid=0, jobs=1): err= 0: pid=3388794: Mon Oct 28 15:36:40 2024 00:40:55.518 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10033msec) 00:40:55.518 slat (nsec): min=7548, max=29158, avg=9798.37, stdev=3358.33 00:40:55.518 clat (usec): min=658, max=44065, avg=21134.27, stdev=20446.05 00:40:55.518 lat (usec): min=666, max=44092, avg=21144.06, stdev=20445.79 00:40:55.518 clat percentiles (usec): 00:40:55.518 | 1.00th=[ 717], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1106], 00:40:55.518 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1844], 60.00th=[41681], 00:40:55.518 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:40:55.518 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:40:55.518 | 99.99th=[44303] 00:40:55.518 bw ( KiB/s): min= 640, max= 832, per=66.59%, avg=756.80, stdev=50.09, samples=20 00:40:55.518 iops : min= 160, max= 208, avg=189.20, stdev=12.52, samples=20 00:40:55.518 lat (usec) : 750=1.74%, 1000=2.37% 00:40:55.518 lat (msec) : 2=46.68%, 4=0.26%, 50=48.95% 00:40:55.518 cpu : usr=94.93%, sys=4.66%, ctx=30, majf=0, minf=9 00:40:55.518 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:55.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.518 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:55.518 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:55.518 filename1: (groupid=0, jobs=1): err= 0: pid=3388795: Mon Oct 28 15:36:40 2024 00:40:55.518 read: IOPS=94, BW=379KiB/s (389kB/s)(3808KiB/10035msec) 00:40:55.518 slat (nsec): min=7509, max=44556, avg=10381.18, stdev=4223.46 00:40:55.518 clat (usec): min=41432, max=43991, avg=42127.99, stdev=428.36 00:40:55.518 lat (usec): min=41441, max=44007, avg=42138.37, stdev=428.88 00:40:55.518 clat percentiles (usec): 00:40:55.518 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:40:55.518 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:55.518 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:40:55.518 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:40:55.518 | 99.99th=[43779] 00:40:55.518 bw ( KiB/s): min= 352, max= 384, per=33.39%, avg=379.20, stdev=11.72, samples=20 00:40:55.518 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:40:55.518 lat (msec) : 50=100.00% 00:40:55.518 cpu : usr=95.21%, sys=4.40%, ctx=13, majf=0, minf=0 00:40:55.518 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:55.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:40:55.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.518 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:55.518 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:55.518 00:40:55.518 Run status group 0 (all jobs): 00:40:55.518 READ: bw=1135KiB/s (1162kB/s), 379KiB/s-756KiB/s (389kB/s-774kB/s), io=11.1MiB (11.7MB), run=10033-10035msec 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 00:40:55.518 real 0m12.135s 00:40:55.518 user 0m20.991s 00:40:55.518 sys 0m1.382s 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 ************************************ 00:40:55.518 END TEST fio_dif_1_multi_subsystems 00:40:55.518 ************************************ 00:40:55.518 15:36:41 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:40:55.518 15:36:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:55.518 15:36:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 ************************************ 00:40:55.518 START TEST fio_dif_rand_params 00:40:55.518 ************************************ 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 bdev_null0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:55.518 [2024-10-28 15:36:41.469539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:55.518 
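(The fio_dif_rand_params pass exercises the same export path, but with NULL_DIF=3 and heavier job parameters per target/dif.sh@103 above: bs=128k, numjobs=3, iodepth=3, runtime=5. The snippet below is only a rough sketch of the job file gen_fio_conf presumably assembles for this case; the output path, the bdev name Nvme0n1 (first namespace of the attached Nvme0 controller) and the thread/time_based flags are assumptions, while the numeric parameters come from the trace.)

    printf '%s\n' \
        '[filename0]' \
        'ioengine=spdk_bdev' \
        'thread=1' \
        'time_based=1' \
        'runtime=5' \
        'rw=randread' \
        'bs=128k' \
        'iodepth=3' \
        'numjobs=3' \
        'filename=Nvme0n1' > dif_rand.job   # hypothetical path; the harness feeds the job over /dev/fd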
15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:55.518 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:55.518 { 00:40:55.518 "params": { 00:40:55.518 "name": "Nvme$subsystem", 00:40:55.518 "trtype": "$TEST_TRANSPORT", 00:40:55.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:55.518 "adrfam": "ipv4", 00:40:55.518 "trsvcid": "$NVMF_PORT", 00:40:55.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:55.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:55.518 "hdgst": ${hdgst:-false}, 00:40:55.518 "ddgst": ${ddgst:-false} 00:40:55.518 }, 00:40:55.518 "method": "bdev_nvme_attach_controller" 00:40:55.518 } 00:40:55.518 EOF 00:40:55.518 )") 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
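(The actual launch, visible in the fio_plugin/fio_bdev wrapper calls traced above, preloads the SPDK bdev engine into a stock fio binary and hands it the bdev_nvme_attach_controller JSON, printed next in the trace, on /dev/fd/62 plus the job file on /dev/fd/61. Below is a simplified equivalent using ordinary files instead of file descriptors; the paths are placeholders.)

    # bdev.json  = the JSON config printed by the trace
    # dif_rand.job = the job file sketched earlier
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_rand.job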
00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:55.519 "params": { 00:40:55.519 "name": "Nvme0", 00:40:55.519 "trtype": "tcp", 00:40:55.519 "traddr": "10.0.0.2", 00:40:55.519 "adrfam": "ipv4", 00:40:55.519 "trsvcid": "4420", 00:40:55.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:55.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:55.519 "hdgst": false, 00:40:55.519 "ddgst": false 00:40:55.519 }, 00:40:55.519 "method": "bdev_nvme_attach_controller" 00:40:55.519 }' 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:55.519 15:36:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:55.519 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:55.519 ... 
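Everything the fio run needs is handed over through pipes rather than files: gen_nvmf_target_json writes the bdev_nvme_attach_controller configuration to /dev/fd/62, gen_fio_conf writes the job sections to /dev/fd/61, and fio is launched with the spdk_bdev plugin in LD_PRELOAD so it can resolve the spdk_bdev ioengine. A standalone equivalent with ordinary files might look like the sketch below; the file names nvme.json and fio-job.conf are placeholders, the JSON wrapper is the standard SPDK config layout, the job options are inferred from the banner fio prints (rw=randread, bs=128KiB, iodepth=3) plus the parameters set at the top of the test (numjobs=3, runtime=5), and Nvme0n1 is assumed to be the bdev name bdev_nvme assigns to namespace 1 of the attached controller.

# nvme.json -- the attach-controller params printed above, in the standard
# SPDK JSON-config layout the fio plugin loads via spdk_json_conf
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

# fio-job.conf -- approximates the job sections the harness pipes over /dev/fd/61
[global]
thread=1
ioengine=spdk_bdev
spdk_json_conf=nvme.json
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1

# run with the plugin preloaded so fio can resolve the spdk_bdev ioengine
LD_PRELOAD=./spdk/build/fio/spdk_bdev fio fio-job.conf

The second pass later in the log follows the same pattern with three attached controllers (Nvme0 through Nvme2), one job section per file, and the parameters set for that pass (bs=4k, iodepth=16, numjobs=8), which is why fio reports 24 threads there.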
00:40:55.519 fio-3.35 00:40:55.519 Starting 3 threads 00:41:00.809 00:41:00.809 filename0: (groupid=0, jobs=1): err= 0: pid=3390072: Mon Oct 28 15:36:47 2024 00:41:00.809 read: IOPS=112, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5011msec) 00:41:00.809 slat (nsec): min=5315, max=68514, avg=33669.49, stdev=11039.39 00:41:00.809 clat (usec): min=7453, max=76410, avg=26559.47, stdev=8407.95 00:41:00.809 lat (usec): min=7469, max=76451, avg=26593.14, stdev=8409.96 00:41:00.809 clat percentiles (usec): 00:41:00.809 | 1.00th=[ 9634], 5.00th=[13042], 10.00th=[16909], 20.00th=[20317], 00:41:00.809 | 30.00th=[22938], 40.00th=[25035], 50.00th=[27395], 60.00th=[28967], 00:41:00.809 | 70.00th=[30278], 80.00th=[31327], 90.00th=[32637], 95.00th=[34341], 00:41:00.810 | 99.00th=[67634], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:41:00.810 | 99.99th=[76022] 00:41:00.810 bw ( KiB/s): min=12288, max=18944, per=34.62%, avg=14387.20, stdev=1978.56, samples=10 00:41:00.810 iops : min= 96, max= 148, avg=112.40, stdev=15.46, samples=10 00:41:00.810 lat (msec) : 10=1.06%, 20=16.99%, 50=79.82%, 100=2.12% 00:41:00.810 cpu : usr=93.55%, sys=5.47%, ctx=39, majf=0, minf=2 00:41:00.810 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.810 issued rwts: total=565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.810 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:00.810 filename0: (groupid=0, jobs=1): err= 0: pid=3390073: Mon Oct 28 15:36:47 2024 00:41:00.810 read: IOPS=107, BW=13.4MiB/s (14.1MB/s)(67.4MiB/5011msec) 00:41:00.810 slat (nsec): min=5854, max=39798, avg=18097.63, stdev=5418.09 00:41:00.810 clat (usec): min=6740, max=69723, avg=27849.65, stdev=13028.31 00:41:00.810 lat (usec): min=6754, max=69748, avg=27867.74, stdev=13028.45 00:41:00.810 clat percentiles (usec): 00:41:00.810 | 1.00th=[ 7701], 5.00th=[12649], 10.00th=[17433], 20.00th=[20841], 00:41:00.810 | 30.00th=[22676], 40.00th=[24249], 50.00th=[25297], 60.00th=[26608], 00:41:00.810 | 70.00th=[27919], 80.00th=[29230], 90.00th=[52691], 95.00th=[64750], 00:41:00.810 | 99.00th=[68682], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:41:00.810 | 99.99th=[69731] 00:41:00.810 bw ( KiB/s): min=10752, max=17408, per=33.02%, avg=13721.60, stdev=2268.01, samples=10 00:41:00.810 iops : min= 84, max= 136, avg=107.20, stdev=17.72, samples=10 00:41:00.810 lat (msec) : 10=2.04%, 20=14.84%, 50=73.10%, 100=10.02% 00:41:00.810 cpu : usr=97.05%, sys=2.46%, ctx=9, majf=0, minf=1 00:41:00.810 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.810 issued rwts: total=539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.810 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:00.810 filename0: (groupid=0, jobs=1): err= 0: pid=3390074: Mon Oct 28 15:36:47 2024 00:41:00.810 read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(66.9MiB/5048msec) 00:41:00.810 slat (nsec): min=5673, max=38398, avg=18342.02, stdev=5336.25 00:41:00.810 clat (usec): min=10647, max=61500, avg=28198.01, stdev=8710.76 00:41:00.810 lat (usec): min=10680, max=61513, avg=28216.35, stdev=8711.01 00:41:00.810 clat percentiles (usec): 00:41:00.810 | 1.00th=[12256], 5.00th=[14222], 10.00th=[17171], 
20.00th=[19792], 00:41:00.810 | 30.00th=[22938], 40.00th=[25560], 50.00th=[29230], 60.00th=[31589], 00:41:00.810 | 70.00th=[32900], 80.00th=[34341], 90.00th=[37487], 95.00th=[41681], 00:41:00.810 | 99.00th=[53216], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:41:00.810 | 99.99th=[61604] 00:41:00.810 bw ( KiB/s): min=11520, max=18432, per=32.77%, avg=13619.20, stdev=2057.93, samples=10 00:41:00.810 iops : min= 90, max= 144, avg=106.40, stdev=16.08, samples=10 00:41:00.810 lat (msec) : 20=20.37%, 50=77.94%, 100=1.68% 00:41:00.810 cpu : usr=96.71%, sys=2.81%, ctx=6, majf=0, minf=0 00:41:00.810 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.810 issued rwts: total=535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.810 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:00.810 00:41:00.810 Run status group 0 (all jobs): 00:41:00.810 READ: bw=40.6MiB/s (42.6MB/s), 13.2MiB/s-14.1MiB/s (13.9MB/s-14.8MB/s), io=205MiB (215MB), run=5011-5048msec 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 bdev_null0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 [2024-10-28 15:36:48.074583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 bdev_null1 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.383 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.384 bdev_null2 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:01.384 { 00:41:01.384 "params": { 00:41:01.384 "name": "Nvme$subsystem", 00:41:01.384 "trtype": "$TEST_TRANSPORT", 00:41:01.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:01.384 "adrfam": "ipv4", 00:41:01.384 "trsvcid": "$NVMF_PORT", 00:41:01.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:01.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:01.384 "hdgst": ${hdgst:-false}, 00:41:01.384 "ddgst": ${ddgst:-false} 00:41:01.384 }, 00:41:01.384 "method": "bdev_nvme_attach_controller" 00:41:01.384 } 00:41:01.384 EOF 00:41:01.384 )") 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:01.384 { 00:41:01.384 "params": { 00:41:01.384 "name": "Nvme$subsystem", 00:41:01.384 "trtype": "$TEST_TRANSPORT", 00:41:01.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:01.384 "adrfam": "ipv4", 00:41:01.384 "trsvcid": "$NVMF_PORT", 00:41:01.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:01.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:01.384 "hdgst": ${hdgst:-false}, 00:41:01.384 "ddgst": ${ddgst:-false} 00:41:01.384 }, 00:41:01.384 "method": "bdev_nvme_attach_controller" 00:41:01.384 } 00:41:01.384 EOF 00:41:01.384 )") 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:01.384 15:36:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:01.384 { 00:41:01.384 "params": { 00:41:01.384 "name": "Nvme$subsystem", 00:41:01.384 "trtype": "$TEST_TRANSPORT", 00:41:01.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:01.384 "adrfam": "ipv4", 00:41:01.384 "trsvcid": "$NVMF_PORT", 00:41:01.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:01.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:01.384 "hdgst": ${hdgst:-false}, 00:41:01.384 "ddgst": ${ddgst:-false} 00:41:01.384 }, 00:41:01.384 "method": "bdev_nvme_attach_controller" 00:41:01.384 } 00:41:01.384 EOF 00:41:01.384 )") 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:01.384 "params": { 00:41:01.384 "name": "Nvme0", 00:41:01.384 "trtype": "tcp", 00:41:01.384 "traddr": "10.0.0.2", 00:41:01.384 "adrfam": "ipv4", 00:41:01.384 "trsvcid": "4420", 00:41:01.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:01.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:01.384 "hdgst": false, 00:41:01.384 "ddgst": false 00:41:01.384 }, 00:41:01.384 "method": "bdev_nvme_attach_controller" 00:41:01.384 },{ 00:41:01.384 "params": { 00:41:01.384 "name": "Nvme1", 00:41:01.384 "trtype": "tcp", 00:41:01.384 "traddr": "10.0.0.2", 00:41:01.384 "adrfam": "ipv4", 00:41:01.384 "trsvcid": "4420", 00:41:01.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:01.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:01.384 "hdgst": false, 00:41:01.384 "ddgst": false 00:41:01.384 }, 00:41:01.384 "method": "bdev_nvme_attach_controller" 00:41:01.384 },{ 00:41:01.384 "params": { 00:41:01.384 "name": "Nvme2", 00:41:01.384 "trtype": "tcp", 00:41:01.384 "traddr": "10.0.0.2", 00:41:01.384 "adrfam": "ipv4", 00:41:01.384 "trsvcid": "4420", 00:41:01.384 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:01.384 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:01.384 "hdgst": false, 00:41:01.384 "ddgst": false 00:41:01.384 }, 00:41:01.384 "method": "bdev_nvme_attach_controller" 00:41:01.384 }' 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:01.384 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:01.643 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:01.643 
15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:01.643 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:01.643 15:36:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:01.903 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:01.903 ... 00:41:01.903 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:01.903 ... 00:41:01.903 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:01.903 ... 00:41:01.903 fio-3.35 00:41:01.903 Starting 24 threads 00:41:14.124 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390942: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=69, BW=279KiB/s (286kB/s)(2840KiB/10175msec) 00:41:14.124 slat (nsec): min=6163, max=99295, avg=19485.82, stdev=18483.96 00:41:14.124 clat (msec): min=10, max=355, avg=228.30, stdev=66.59 00:41:14.124 lat (msec): min=10, max=355, avg=228.32, stdev=66.59 00:41:14.124 clat percentiles (msec): 00:41:14.124 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 174], 20.00th=[ 218], 00:41:14.124 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 249], 00:41:14.124 | 70.00th=[ 257], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 292], 00:41:14.124 | 99.00th=[ 313], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:41:14.124 | 99.99th=[ 355] 00:41:14.124 bw ( KiB/s): min= 128, max= 640, per=5.04%, avg=277.60, stdev=102.81, samples=20 00:41:14.124 iops : min= 32, max= 160, avg=69.40, stdev=25.70, samples=20 00:41:14.124 lat (msec) : 20=4.51%, 50=2.25%, 250=55.63%, 500=37.61% 00:41:14.124 cpu : usr=98.42%, sys=1.08%, ctx=28, majf=0, minf=70 00:41:14.124 IO depths : 1=2.8%, 2=5.8%, 4=15.1%, 8=66.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 complete : 0=0.0%, 4=91.2%, 8=3.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390943: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=70, BW=280KiB/s (287kB/s)(2848KiB/10160msec) 00:41:14.124 slat (usec): min=7, max=108, avg=14.13, stdev=14.06 00:41:14.124 clat (msec): min=121, max=424, avg=227.30, stdev=53.49 00:41:14.124 lat (msec): min=121, max=424, avg=227.32, stdev=53.49 00:41:14.124 clat percentiles (msec): 00:41:14.124 | 1.00th=[ 122], 5.00th=[ 124], 10.00th=[ 148], 20.00th=[ 182], 00:41:14.124 | 30.00th=[ 220], 40.00th=[ 228], 50.00th=[ 236], 60.00th=[ 241], 00:41:14.124 | 70.00th=[ 249], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 292], 00:41:14.124 | 99.00th=[ 380], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:41:14.124 | 99.99th=[ 426] 00:41:14.124 bw ( KiB/s): min= 176, max= 432, per=5.06%, avg=278.40, stdev=60.40, samples=20 00:41:14.124 iops : min= 44, max= 108, avg=69.60, stdev=15.10, samples=20 00:41:14.124 lat (msec) : 250=70.79%, 500=29.21% 00:41:14.124 cpu : usr=98.61%, sys=0.96%, ctx=14, majf=0, minf=35 00:41:14.124 IO depths : 1=0.4%, 2=1.4%, 4=9.0%, 8=77.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 complete : 0=0.0%, 4=89.5%, 8=5.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 issued rwts: total=712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390944: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=47, BW=190KiB/s (195kB/s)(1920KiB/10101msec) 00:41:14.124 slat (nsec): min=8382, max=79261, avg=22466.00, stdev=16464.24 00:41:14.124 clat (msec): min=121, max=574, avg=336.49, stdev=79.95 00:41:14.124 lat (msec): min=121, max=574, avg=336.52, stdev=79.94 00:41:14.124 clat percentiles (msec): 00:41:14.124 | 1.00th=[ 157], 5.00th=[ 197], 10.00th=[ 218], 20.00th=[ 271], 00:41:14.124 | 30.00th=[ 305], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 363], 00:41:14.124 | 70.00th=[ 376], 80.00th=[ 384], 90.00th=[ 426], 95.00th=[ 460], 00:41:14.124 | 99.00th=[ 523], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:41:14.124 | 99.99th=[ 575] 00:41:14.124 bw ( KiB/s): min= 112, max= 384, per=3.37%, avg=185.60, stdev=77.59, samples=20 00:41:14.124 iops : min= 28, max= 96, avg=46.40, stdev=19.40, samples=20 00:41:14.124 lat (msec) : 250=17.92%, 500=80.42%, 750=1.67% 00:41:14.124 cpu : usr=98.60%, sys=0.94%, ctx=41, majf=0, minf=33 00:41:14.124 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390945: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=51, BW=208KiB/s (213kB/s)(2112KiB/10160msec) 00:41:14.124 slat (usec): min=8, max=113, avg=47.10, stdev=29.20 00:41:14.124 clat (msec): min=130, max=504, avg=307.11, stdev=80.36 00:41:14.124 lat (msec): min=130, max=504, avg=307.16, stdev=80.38 00:41:14.124 clat percentiles (msec): 00:41:14.124 | 1.00th=[ 131], 5.00th=[ 163], 10.00th=[ 192], 20.00th=[ 232], 00:41:14.124 | 30.00th=[ 259], 40.00th=[ 292], 50.00th=[ 330], 60.00th=[ 347], 00:41:14.124 | 70.00th=[ 363], 80.00th=[ 372], 90.00th=[ 388], 95.00th=[ 426], 00:41:14.124 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 506], 99.95th=[ 506], 00:41:14.124 | 99.99th=[ 506] 00:41:14.124 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=204.80, stdev=74.07, samples=20 00:41:14.124 iops : min= 32, max= 96, avg=51.20, stdev=18.52, samples=20 00:41:14.124 lat (msec) : 250=26.14%, 500=72.73%, 750=1.14% 00:41:14.124 cpu : usr=98.24%, sys=1.27%, ctx=14, majf=0, minf=40 00:41:14.124 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390946: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10135msec) 00:41:14.124 slat (usec): min=9, max=110, avg=39.11, stdev=23.67 00:41:14.124 clat (msec): min=184, max=493, avg=306.77, stdev=66.57 00:41:14.124 lat (msec): min=184, max=493, avg=306.81, stdev=66.57 00:41:14.124 clat percentiles (msec): 
00:41:14.124 | 1.00th=[ 186], 5.00th=[ 201], 10.00th=[ 203], 20.00th=[ 245], 00:41:14.124 | 30.00th=[ 253], 40.00th=[ 288], 50.00th=[ 338], 60.00th=[ 351], 00:41:14.124 | 70.00th=[ 363], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 384], 00:41:14.124 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 493], 99.95th=[ 493], 00:41:14.124 | 99.99th=[ 493] 00:41:14.124 bw ( KiB/s): min= 128, max= 256, per=3.71%, avg=204.80, stdev=61.33, samples=20 00:41:14.124 iops : min= 32, max= 64, avg=51.20, stdev=15.33, samples=20 00:41:14.124 lat (msec) : 250=22.35%, 500=77.65% 00:41:14.124 cpu : usr=98.52%, sys=1.04%, ctx=16, majf=0, minf=45 00:41:14.124 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390947: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=63, BW=252KiB/s (258kB/s)(2560KiB/10150msec) 00:41:14.124 slat (usec): min=8, max=103, avg=21.51, stdev=20.28 00:41:14.124 clat (msec): min=138, max=393, avg=252.73, stdev=48.19 00:41:14.124 lat (msec): min=138, max=393, avg=252.75, stdev=48.19 00:41:14.124 clat percentiles (msec): 00:41:14.124 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 201], 20.00th=[ 218], 00:41:14.124 | 30.00th=[ 226], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 255], 00:41:14.124 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 317], 95.00th=[ 351], 00:41:14.124 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:41:14.124 | 99.99th=[ 393] 00:41:14.124 bw ( KiB/s): min= 144, max= 384, per=4.53%, avg=249.60, stdev=55.04, samples=20 00:41:14.124 iops : min= 36, max= 96, avg=62.40, stdev=13.76, samples=20 00:41:14.124 lat (msec) : 250=56.88%, 500=43.12% 00:41:14.124 cpu : usr=98.49%, sys=1.03%, ctx=20, majf=0, minf=35 00:41:14.124 IO depths : 1=0.9%, 2=3.0%, 4=11.9%, 8=72.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 complete : 0=0.0%, 4=90.2%, 8=4.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390948: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10134msec) 00:41:14.124 slat (nsec): min=8150, max=87568, avg=30322.13, stdev=17714.22 00:41:14.124 clat (msec): min=123, max=568, avg=306.81, stdev=69.51 00:41:14.124 lat (msec): min=123, max=568, avg=306.84, stdev=69.51 00:41:14.124 clat percentiles (msec): 00:41:14.124 | 1.00th=[ 176], 5.00th=[ 197], 10.00th=[ 218], 20.00th=[ 243], 00:41:14.124 | 30.00th=[ 253], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 351], 00:41:14.124 | 70.00th=[ 355], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 401], 00:41:14.124 | 99.00th=[ 456], 99.50th=[ 527], 99.90th=[ 567], 99.95th=[ 567], 00:41:14.124 | 99.99th=[ 567] 00:41:14.124 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=204.80, stdev=72.79, samples=20 00:41:14.124 iops : min= 32, max= 96, avg=51.20, stdev=18.20, samples=20 00:41:14.124 lat (msec) : 250=27.65%, 500=71.59%, 750=0.76% 00:41:14.124 cpu : usr=98.54%, sys=1.00%, ctx=259, majf=0, minf=35 00:41:14.124 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 
8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.124 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.124 filename0: (groupid=0, jobs=1): err= 0: pid=3390949: Mon Oct 28 15:36:59 2024 00:41:14.124 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10160msec) 00:41:14.124 slat (nsec): min=8026, max=66830, avg=20757.77, stdev=14342.13 00:41:14.124 clat (msec): min=138, max=479, avg=259.58, stdev=69.62 00:41:14.124 lat (msec): min=138, max=479, avg=259.60, stdev=69.62 00:41:14.124 clat percentiles (msec): 00:41:14.124 | 1.00th=[ 138], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 194], 00:41:14.124 | 30.00th=[ 207], 40.00th=[ 245], 50.00th=[ 257], 60.00th=[ 279], 00:41:14.124 | 70.00th=[ 288], 80.00th=[ 338], 90.00th=[ 368], 95.00th=[ 372], 00:41:14.124 | 99.00th=[ 380], 99.50th=[ 456], 99.90th=[ 481], 99.95th=[ 481], 00:41:14.124 | 99.99th=[ 481] 00:41:14.124 bw ( KiB/s): min= 128, max= 384, per=4.42%, avg=243.20, stdev=91.93, samples=20 00:41:14.124 iops : min= 32, max= 96, avg=60.80, stdev=22.98, samples=20 00:41:14.124 lat (msec) : 250=47.12%, 500=52.88% 00:41:14.124 cpu : usr=98.27%, sys=1.17%, ctx=44, majf=0, minf=61 00:41:14.124 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:41:14.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390950: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=47, BW=189KiB/s (194kB/s)(1920KiB/10134msec) 00:41:14.125 slat (usec): min=8, max=104, avg=28.55, stdev=23.74 00:41:14.125 clat (msec): min=194, max=496, avg=337.53, stdev=72.12 00:41:14.125 lat (msec): min=194, max=496, avg=337.56, stdev=72.11 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 194], 5.00th=[ 197], 10.00th=[ 218], 20.00th=[ 284], 00:41:14.125 | 30.00th=[ 313], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 359], 00:41:14.125 | 70.00th=[ 372], 80.00th=[ 388], 90.00th=[ 426], 95.00th=[ 460], 00:41:14.125 | 99.00th=[ 477], 99.50th=[ 493], 99.90th=[ 498], 99.95th=[ 498], 00:41:14.125 | 99.99th=[ 498] 00:41:14.125 bw ( KiB/s): min= 128, max= 384, per=3.37%, avg=185.60, stdev=74.94, samples=20 00:41:14.125 iops : min= 32, max= 96, avg=46.40, stdev=18.73, samples=20 00:41:14.125 lat (msec) : 250=17.08%, 500=82.92% 00:41:14.125 cpu : usr=98.37%, sys=1.13%, ctx=40, majf=0, minf=35 00:41:14.125 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390951: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=65, BW=264KiB/s (270kB/s)(2680KiB/10160msec) 00:41:14.125 slat (usec): min=7, max=106, avg=27.40, stdev=26.96 00:41:14.125 clat (msec): min=130, max=388, avg=240.97, stdev=42.62 00:41:14.125 lat (msec): min=130, max=388, 
avg=240.99, stdev=42.62 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 131], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 203], 00:41:14.125 | 30.00th=[ 224], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:41:14.125 | 70.00th=[ 259], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 292], 00:41:14.125 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:41:14.125 | 99.99th=[ 388] 00:41:14.125 bw ( KiB/s): min= 128, max= 384, per=4.75%, avg=261.60, stdev=62.35, samples=20 00:41:14.125 iops : min= 32, max= 96, avg=65.40, stdev=15.59, samples=20 00:41:14.125 lat (msec) : 250=58.51%, 500=41.49% 00:41:14.125 cpu : usr=98.21%, sys=1.32%, ctx=21, majf=0, minf=38 00:41:14.125 IO depths : 1=3.1%, 2=6.9%, 4=17.2%, 8=63.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390952: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=47, BW=189KiB/s (193kB/s)(1912KiB/10135msec) 00:41:14.125 slat (nsec): min=8131, max=96518, avg=22570.78, stdev=15511.89 00:41:14.125 clat (msec): min=137, max=564, avg=338.89, stdev=78.08 00:41:14.125 lat (msec): min=137, max=564, avg=338.92, stdev=78.08 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 201], 20.00th=[ 264], 00:41:14.125 | 30.00th=[ 330], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:41:14.125 | 70.00th=[ 384], 80.00th=[ 393], 90.00th=[ 422], 95.00th=[ 426], 00:41:14.125 | 99.00th=[ 531], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:41:14.125 | 99.99th=[ 567] 00:41:14.125 bw ( KiB/s): min= 128, max= 384, per=3.35%, avg=184.80, stdev=74.23, samples=20 00:41:14.125 iops : min= 32, max= 96, avg=46.20, stdev=18.56, samples=20 00:41:14.125 lat (msec) : 250=18.41%, 500=79.92%, 750=1.67% 00:41:14.125 cpu : usr=98.55%, sys=0.95%, ctx=65, majf=0, minf=32 00:41:14.125 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390953: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=48, BW=196KiB/s (200kB/s)(1984KiB/10135msec) 00:41:14.125 slat (usec): min=9, max=105, avg=58.42, stdev=26.53 00:41:14.125 clat (msec): min=121, max=493, avg=326.37, stdev=68.12 00:41:14.125 lat (msec): min=121, max=493, avg=326.43, stdev=68.13 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 194], 5.00th=[ 197], 10.00th=[ 218], 20.00th=[ 284], 00:41:14.125 | 30.00th=[ 292], 40.00th=[ 321], 50.00th=[ 342], 60.00th=[ 351], 00:41:14.125 | 70.00th=[ 363], 80.00th=[ 376], 90.00th=[ 401], 95.00th=[ 426], 00:41:14.125 | 99.00th=[ 489], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:41:14.125 | 99.99th=[ 493] 00:41:14.125 bw ( KiB/s): min= 128, max= 384, per=3.49%, avg=192.00, stdev=76.47, samples=20 00:41:14.125 iops : min= 32, max= 96, avg=48.00, stdev=19.12, samples=20 00:41:14.125 lat (msec) : 250=16.94%, 500=83.06% 00:41:14.125 cpu : usr=98.50%, sys=1.07%, ctx=7, majf=0, 
minf=29 00:41:14.125 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390954: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=70, BW=284KiB/s (291kB/s)(2888KiB/10171msec) 00:41:14.125 slat (usec): min=7, max=152, avg=18.92, stdev=20.91 00:41:14.125 clat (msec): min=9, max=393, avg=223.84, stdev=66.41 00:41:14.125 lat (msec): min=9, max=393, avg=223.86, stdev=66.40 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 174], 20.00th=[ 201], 00:41:14.125 | 30.00th=[ 218], 40.00th=[ 234], 50.00th=[ 241], 60.00th=[ 247], 00:41:14.125 | 70.00th=[ 255], 80.00th=[ 268], 90.00th=[ 284], 95.00th=[ 288], 00:41:14.125 | 99.00th=[ 347], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:41:14.125 | 99.99th=[ 393] 00:41:14.125 bw ( KiB/s): min= 176, max= 641, per=5.13%, avg=282.45, stdev=97.35, samples=20 00:41:14.125 iops : min= 44, max= 160, avg=70.60, stdev=24.29, samples=20 00:41:14.125 lat (msec) : 10=2.22%, 20=2.22%, 50=2.22%, 250=58.73%, 500=34.63% 00:41:14.125 cpu : usr=98.45%, sys=1.12%, ctx=20, majf=0, minf=30 00:41:14.125 IO depths : 1=1.5%, 2=5.1%, 4=16.9%, 8=65.4%, 16=11.1%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390955: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10120msec) 00:41:14.125 slat (nsec): min=4639, max=61276, avg=24982.49, stdev=11085.25 00:41:14.125 clat (msec): min=178, max=400, avg=297.43, stdev=64.20 00:41:14.125 lat (msec): min=178, max=400, avg=297.45, stdev=64.20 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 180], 5.00th=[ 194], 10.00th=[ 201], 20.00th=[ 232], 00:41:14.125 | 30.00th=[ 253], 40.00th=[ 275], 50.00th=[ 288], 60.00th=[ 338], 00:41:14.125 | 70.00th=[ 351], 80.00th=[ 368], 90.00th=[ 380], 95.00th=[ 384], 00:41:14.125 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:41:14.125 | 99.99th=[ 401] 00:41:14.125 bw ( KiB/s): min= 128, max= 368, per=3.84%, avg=211.20, stdev=71.10, samples=20 00:41:14.125 iops : min= 32, max= 92, avg=52.80, stdev=17.78, samples=20 00:41:14.125 lat (msec) : 250=26.10%, 500=73.90% 00:41:14.125 cpu : usr=98.29%, sys=1.18%, ctx=35, majf=0, minf=44 00:41:14.125 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390956: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=63, BW=254KiB/s (260kB/s)(2584KiB/10160msec) 00:41:14.125 slat (usec): min=6, max=116, avg=25.40, stdev=25.63 00:41:14.125 clat (msec): 
min=124, max=497, avg=250.58, stdev=50.72 00:41:14.125 lat (msec): min=124, max=497, avg=250.60, stdev=50.72 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 197], 20.00th=[ 220], 00:41:14.125 | 30.00th=[ 232], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 253], 00:41:14.125 | 70.00th=[ 266], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 321], 00:41:14.125 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 498], 00:41:14.125 | 99.99th=[ 498] 00:41:14.125 bw ( KiB/s): min= 128, max= 368, per=4.58%, avg=252.00, stdev=64.81, samples=20 00:41:14.125 iops : min= 32, max= 92, avg=63.00, stdev=16.20, samples=20 00:41:14.125 lat (msec) : 250=56.97%, 500=43.03% 00:41:14.125 cpu : usr=98.38%, sys=1.19%, ctx=11, majf=0, minf=46 00:41:14.125 IO depths : 1=1.1%, 2=3.7%, 4=14.1%, 8=69.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=91.0%, 8=3.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.125 filename1: (groupid=0, jobs=1): err= 0: pid=3390957: Mon Oct 28 15:36:59 2024 00:41:14.125 read: IOPS=53, BW=214KiB/s (219kB/s)(2176KiB/10157msec) 00:41:14.125 slat (usec): min=5, max=117, avg=52.96, stdev=33.56 00:41:14.125 clat (msec): min=177, max=498, avg=298.32, stdev=71.29 00:41:14.125 lat (msec): min=177, max=499, avg=298.37, stdev=71.31 00:41:14.125 clat percentiles (msec): 00:41:14.125 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 197], 20.00th=[ 220], 00:41:14.125 | 30.00th=[ 249], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 338], 00:41:14.125 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 422], 00:41:14.125 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 498], 99.95th=[ 498], 00:41:14.125 | 99.99th=[ 498] 00:41:14.125 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=211.20, stdev=81.19, samples=20 00:41:14.125 iops : min= 32, max= 96, avg=52.80, stdev=20.30, samples=20 00:41:14.125 lat (msec) : 250=30.15%, 500=69.85% 00:41:14.125 cpu : usr=98.45%, sys=1.09%, ctx=19, majf=0, minf=31 00:41:14.125 IO depths : 1=2.4%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:41:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.125 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390958: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10138msec) 00:41:14.126 slat (nsec): min=4506, max=88995, avg=33667.64, stdev=18901.33 00:41:14.126 clat (msec): min=184, max=497, avg=306.90, stdev=64.02 00:41:14.126 lat (msec): min=184, max=497, avg=306.94, stdev=64.01 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 186], 5.00th=[ 201], 10.00th=[ 220], 20.00th=[ 247], 00:41:14.126 | 30.00th=[ 253], 40.00th=[ 288], 50.00th=[ 330], 60.00th=[ 342], 00:41:14.126 | 70.00th=[ 351], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 384], 00:41:14.126 | 99.00th=[ 409], 99.50th=[ 493], 99.90th=[ 498], 99.95th=[ 498], 00:41:14.126 | 99.99th=[ 498] 00:41:14.126 bw ( KiB/s): min= 128, max= 256, per=3.71%, avg=204.80, stdev=61.33, samples=20 00:41:14.126 iops : min= 32, max= 64, avg=51.20, stdev=15.33, samples=20 00:41:14.126 lat (msec) : 
250=21.59%, 500=78.41% 00:41:14.126 cpu : usr=98.41%, sys=1.15%, ctx=18, majf=0, minf=29 00:41:14.126 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390959: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10160msec) 00:41:14.126 slat (nsec): min=8077, max=52144, avg=22840.76, stdev=9423.69 00:41:14.126 clat (msec): min=130, max=454, avg=281.21, stdev=67.50 00:41:14.126 lat (msec): min=130, max=454, avg=281.24, stdev=67.50 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 131], 5.00th=[ 163], 10.00th=[ 197], 20.00th=[ 220], 00:41:14.126 | 30.00th=[ 247], 40.00th=[ 259], 50.00th=[ 279], 60.00th=[ 288], 00:41:14.126 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 368], 95.00th=[ 372], 00:41:14.126 | 99.00th=[ 388], 99.50th=[ 439], 99.90th=[ 456], 99.95th=[ 456], 00:41:14.126 | 99.99th=[ 456] 00:41:14.126 bw ( KiB/s): min= 128, max= 384, per=4.06%, avg=224.00, stdev=67.68, samples=20 00:41:14.126 iops : min= 32, max= 96, avg=56.00, stdev=16.92, samples=20 00:41:14.126 lat (msec) : 250=33.33%, 500=66.67% 00:41:14.126 cpu : usr=98.03%, sys=1.37%, ctx=15, majf=0, minf=45 00:41:14.126 IO depths : 1=2.4%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390960: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=69, BW=277KiB/s (283kB/s)(2816KiB/10172msec) 00:41:14.126 slat (usec): min=4, max=101, avg=19.48, stdev=18.11 00:41:14.126 clat (msec): min=9, max=431, avg=229.47, stdev=75.50 00:41:14.126 lat (msec): min=9, max=431, avg=229.49, stdev=75.49 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 146], 20.00th=[ 197], 00:41:14.126 | 30.00th=[ 220], 40.00th=[ 228], 50.00th=[ 243], 60.00th=[ 247], 00:41:14.126 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 342], 00:41:14.126 | 99.00th=[ 393], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:41:14.126 | 99.99th=[ 430] 00:41:14.126 bw ( KiB/s): min= 176, max= 640, per=5.00%, avg=275.20, stdev=96.64, samples=20 00:41:14.126 iops : min= 44, max= 160, avg=68.80, stdev=24.16, samples=20 00:41:14.126 lat (msec) : 10=0.99%, 20=3.27%, 50=2.56%, 250=58.81%, 500=34.38% 00:41:14.126 cpu : usr=98.35%, sys=1.23%, ctx=11, majf=0, minf=51 00:41:14.126 IO depths : 1=0.3%, 2=3.0%, 4=13.8%, 8=70.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=90.9%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390961: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10141msec) 00:41:14.126 
slat (usec): min=8, max=112, avg=35.22, stdev=28.45 00:41:14.126 clat (msec): min=146, max=560, avg=316.59, stdev=79.34 00:41:14.126 lat (msec): min=146, max=560, avg=316.62, stdev=79.34 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 232], 00:41:14.126 | 30.00th=[ 262], 40.00th=[ 296], 50.00th=[ 342], 60.00th=[ 355], 00:41:14.126 | 70.00th=[ 368], 80.00th=[ 376], 90.00th=[ 401], 95.00th=[ 426], 00:41:14.126 | 99.00th=[ 502], 99.50th=[ 535], 99.90th=[ 558], 99.95th=[ 558], 00:41:14.126 | 99.99th=[ 558] 00:41:14.126 bw ( KiB/s): min= 128, max= 256, per=3.60%, avg=198.40, stdev=62.38, samples=20 00:41:14.126 iops : min= 32, max= 64, avg=49.60, stdev=15.59, samples=20 00:41:14.126 lat (msec) : 250=25.39%, 500=73.44%, 750=1.17% 00:41:14.126 cpu : usr=98.45%, sys=1.09%, ctx=12, majf=0, minf=44 00:41:14.126 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390962: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=54, BW=219KiB/s (224kB/s)(2224KiB/10160msec) 00:41:14.126 slat (usec): min=8, max=111, avg=43.25, stdev=32.25 00:41:14.126 clat (msec): min=130, max=427, avg=290.46, stdev=76.56 00:41:14.126 lat (msec): min=130, max=427, avg=290.50, stdev=76.58 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 131], 5.00th=[ 163], 10.00th=[ 197], 20.00th=[ 207], 00:41:14.126 | 30.00th=[ 245], 40.00th=[ 259], 50.00th=[ 288], 60.00th=[ 338], 00:41:14.126 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 422], 00:41:14.126 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:41:14.126 | 99.99th=[ 426] 00:41:14.126 bw ( KiB/s): min= 128, max= 384, per=3.91%, avg=216.00, stdev=72.58, samples=20 00:41:14.126 iops : min= 32, max= 96, avg=54.00, stdev=18.15, samples=20 00:41:14.126 lat (msec) : 250=33.81%, 500=66.19% 00:41:14.126 cpu : usr=98.53%, sys=0.99%, ctx=14, majf=0, minf=49 00:41:14.126 IO depths : 1=5.0%, 2=10.6%, 4=22.8%, 8=54.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390963: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10148msec) 00:41:14.126 slat (nsec): min=8968, max=95501, avg=29041.31, stdev=20866.78 00:41:14.126 clat (msec): min=122, max=557, avg=288.81, stdev=66.86 00:41:14.126 lat (msec): min=122, max=557, avg=288.84, stdev=66.86 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 174], 5.00th=[ 194], 10.00th=[ 201], 20.00th=[ 220], 00:41:14.126 | 30.00th=[ 249], 40.00th=[ 264], 50.00th=[ 284], 60.00th=[ 292], 00:41:14.126 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 372], 95.00th=[ 380], 00:41:14.126 | 99.00th=[ 464], 99.50th=[ 481], 99.90th=[ 558], 99.95th=[ 558], 00:41:14.126 | 99.99th=[ 558] 00:41:14.126 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=217.60, stdev=80.49, samples=20 00:41:14.126 iops : 
min= 32, max= 96, avg=54.40, stdev=20.12, samples=20 00:41:14.126 lat (msec) : 250=30.00%, 500=69.64%, 750=0.36% 00:41:14.126 cpu : usr=98.28%, sys=1.31%, ctx=13, majf=0, minf=46 00:41:14.126 IO depths : 1=2.5%, 2=8.0%, 4=22.9%, 8=56.6%, 16=10.0%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390964: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10141msec) 00:41:14.126 slat (nsec): min=4516, max=98412, avg=34373.94, stdev=20481.06 00:41:14.126 clat (msec): min=159, max=498, avg=307.01, stdev=64.18 00:41:14.126 lat (msec): min=159, max=498, avg=307.05, stdev=64.18 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 186], 5.00th=[ 201], 10.00th=[ 220], 20.00th=[ 245], 00:41:14.126 | 30.00th=[ 253], 40.00th=[ 292], 50.00th=[ 330], 60.00th=[ 342], 00:41:14.126 | 70.00th=[ 355], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 384], 00:41:14.126 | 99.00th=[ 405], 99.50th=[ 481], 99.90th=[ 498], 99.95th=[ 498], 00:41:14.126 | 99.99th=[ 498] 00:41:14.126 bw ( KiB/s): min= 128, max= 272, per=3.71%, avg=204.80, stdev=61.55, samples=20 00:41:14.126 iops : min= 32, max= 68, avg=51.20, stdev=15.39, samples=20 00:41:14.126 lat (msec) : 250=22.73%, 500=77.27% 00:41:14.126 cpu : usr=98.33%, sys=1.17%, ctx=62, majf=0, minf=31 00:41:14.126 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 filename2: (groupid=0, jobs=1): err= 0: pid=3390965: Mon Oct 28 15:36:59 2024 00:41:14.126 read: IOPS=67, BW=271KiB/s (277kB/s)(2752KiB/10160msec) 00:41:14.126 slat (nsec): min=7988, max=91545, avg=16215.38, stdev=14829.54 00:41:14.126 clat (msec): min=130, max=396, avg=234.63, stdev=53.64 00:41:14.126 lat (msec): min=130, max=396, avg=234.65, stdev=53.63 00:41:14.126 clat percentiles (msec): 00:41:14.126 | 1.00th=[ 131], 5.00th=[ 140], 10.00th=[ 163], 20.00th=[ 186], 00:41:14.126 | 30.00th=[ 220], 40.00th=[ 232], 50.00th=[ 241], 60.00th=[ 245], 00:41:14.126 | 70.00th=[ 259], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 334], 00:41:14.126 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:41:14.126 | 99.99th=[ 397] 00:41:14.126 bw ( KiB/s): min= 176, max= 384, per=4.88%, avg=268.80, stdev=64.13, samples=20 00:41:14.126 iops : min= 44, max= 96, avg=67.20, stdev=16.03, samples=20 00:41:14.126 lat (msec) : 250=62.79%, 500=37.21% 00:41:14.126 cpu : usr=98.10%, sys=1.29%, ctx=91, majf=0, minf=46 00:41:14.126 IO depths : 1=0.9%, 2=2.2%, 4=9.7%, 8=75.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:41:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 complete : 0=0.0%, 4=89.6%, 8=5.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.126 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.126 00:41:14.126 Run status group 0 (all jobs): 00:41:14.126 READ: bw=5497KiB/s (5629kB/s), 
189KiB/s-284KiB/s (193kB/s-291kB/s), io=54.6MiB (57.3MB), run=10101-10175msec 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 
15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 bdev_null0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 [2024-10-28 15:37:00.410622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 bdev_null1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:14.127 { 00:41:14.127 "params": { 00:41:14.127 "name": "Nvme$subsystem", 00:41:14.127 "trtype": "$TEST_TRANSPORT", 00:41:14.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:14.127 "adrfam": "ipv4", 00:41:14.127 "trsvcid": "$NVMF_PORT", 00:41:14.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:14.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:14.127 "hdgst": ${hdgst:-false}, 00:41:14.127 "ddgst": ${ddgst:-false} 00:41:14.127 }, 00:41:14.127 "method": "bdev_nvme_attach_controller" 00:41:14.127 } 00:41:14.127 EOF 00:41:14.127 )") 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:14.127 
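Note on the setup above: the preceding entries build the rand_params I/O path by hand, creating two 64 MiB null bdevs with 512-byte blocks, 16 bytes of metadata and DIF type 1, each exported through its own NVMe-oF subsystem on the 10.0.0.2:4420 TCP listener. rpc_cmd is the test framework's wrapper around SPDK's scripts/rpc.py, so a standalone sketch of the same sequence (the rpc.py path and default RPC socket are assumptions; the RPC names and arguments are taken verbatim from the log) would look like:

  # Sketch only: recreate the two DIF-enabled null-bdev subsystems by hand.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in 0 1; do
      # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
      $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          --serial-number 53313233-$i --allow-any-host
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
  done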
15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:14.127 { 00:41:14.127 "params": { 00:41:14.127 "name": "Nvme$subsystem", 00:41:14.127 "trtype": "$TEST_TRANSPORT", 00:41:14.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:14.127 "adrfam": "ipv4", 00:41:14.127 "trsvcid": "$NVMF_PORT", 00:41:14.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:14.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:14.127 "hdgst": ${hdgst:-false}, 00:41:14.127 "ddgst": ${ddgst:-false} 00:41:14.127 }, 00:41:14.127 "method": "bdev_nvme_attach_controller" 00:41:14.127 } 00:41:14.127 EOF 00:41:14.127 )") 00:41:14.127 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:14.128 "params": { 00:41:14.128 "name": "Nvme0", 00:41:14.128 "trtype": "tcp", 00:41:14.128 "traddr": "10.0.0.2", 00:41:14.128 "adrfam": "ipv4", 00:41:14.128 "trsvcid": "4420", 00:41:14.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:14.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:14.128 "hdgst": false, 00:41:14.128 "ddgst": false 00:41:14.128 }, 00:41:14.128 "method": "bdev_nvme_attach_controller" 00:41:14.128 },{ 00:41:14.128 "params": { 00:41:14.128 "name": "Nvme1", 00:41:14.128 "trtype": "tcp", 00:41:14.128 "traddr": "10.0.0.2", 00:41:14.128 "adrfam": "ipv4", 00:41:14.128 "trsvcid": "4420", 00:41:14.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:14.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:14.128 "hdgst": false, 00:41:14.128 "ddgst": false 00:41:14.128 }, 00:41:14.128 "method": "bdev_nvme_attach_controller" 00:41:14.128 }' 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:14.128 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.128 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:14.128 ... 00:41:14.128 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:14.128 ... 
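Note on the invocation above: the printf output is the per-controller configuration that gen_nvmf_target_json assembled from the heredoc fragments, and fio is then launched with the SPDK bdev plugin preloaded, the JSON fed in on /dev/fd/62 and the generated job file on /dev/fd/61. A minimal sketch of running the same plugin by hand follows; the file names are placeholders, and the JSON file is assumed to carry the bdev_nvme_attach_controller entries printed above inside the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope, which this excerpt does not show.

  # Sketch: invoke the fio bdev plugin directly against a saved JSON config.
  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  LD_PRELOAD=$plugin /usr/src/fio/fio \
      --ioengine=spdk_bdev \
      --spdk_json_conf=bdev.json \
      dif.job   # placeholder job file: randread, bs=8k,16k,128k,
                # iodepth=8, numjobs=2, runtime=5, as set at dif.sh@115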
00:41:14.128 fio-3.35 00:41:14.128 Starting 4 threads 00:41:20.690 00:41:20.690 filename0: (groupid=0, jobs=1): err= 0: pid=3392475: Mon Oct 28 15:37:07 2024 00:41:20.690 read: IOPS=918, BW=7346KiB/s (7522kB/s)(35.9MiB/5003msec) 00:41:20.690 slat (usec): min=4, max=114, avg=31.39, stdev=15.24 00:41:20.690 clat (usec): min=2051, max=15493, avg=8587.26, stdev=1338.32 00:41:20.690 lat (usec): min=2075, max=15520, avg=8618.65, stdev=1340.40 00:41:20.690 clat percentiles (usec): 00:41:20.690 | 1.00th=[ 4948], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 7767], 00:41:20.690 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8979], 00:41:20.690 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:41:20.690 | 99.00th=[11338], 99.50th=[12125], 99.90th=[15008], 99.95th=[15008], 00:41:20.690 | 99.99th=[15533] 00:41:20.690 bw ( KiB/s): min= 6272, max= 8384, per=25.38%, avg=7338.90, stdev=661.30, samples=10 00:41:20.690 iops : min= 784, max= 1048, avg=917.30, stdev=82.71, samples=10 00:41:20.690 lat (msec) : 4=0.22%, 10=86.33%, 20=13.45% 00:41:20.690 cpu : usr=94.86%, sys=4.28%, ctx=8, majf=0, minf=0 00:41:20.690 IO depths : 1=1.6%, 2=19.8%, 4=54.7%, 8=23.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 issued rwts: total=4594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.690 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.690 filename0: (groupid=0, jobs=1): err= 0: pid=3392476: Mon Oct 28 15:37:07 2024 00:41:20.690 read: IOPS=894, BW=7156KiB/s (7327kB/s)(35.0MiB/5002msec) 00:41:20.690 slat (nsec): min=4790, max=81999, avg=23493.87, stdev=14642.13 00:41:20.690 clat (usec): min=1276, max=22398, avg=8849.19, stdev=1892.65 00:41:20.690 lat (usec): min=1290, max=22410, avg=8872.69, stdev=1893.08 00:41:20.690 clat percentiles (usec): 00:41:20.690 | 1.00th=[ 3294], 5.00th=[ 5932], 10.00th=[ 6980], 20.00th=[ 7898], 00:41:20.690 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 9241], 00:41:20.690 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[11076], 00:41:20.690 | 99.00th=[16581], 99.50th=[17433], 99.90th=[17695], 99.95th=[18220], 00:41:20.690 | 99.99th=[22414] 00:41:20.690 bw ( KiB/s): min= 6256, max= 8064, per=24.72%, avg=7146.90, stdev=648.39, samples=10 00:41:20.690 iops : min= 782, max= 1008, avg=893.30, stdev=81.10, samples=10 00:41:20.690 lat (msec) : 2=0.27%, 4=1.21%, 10=80.40%, 20=18.10%, 50=0.02% 00:41:20.690 cpu : usr=94.06%, sys=3.80%, ctx=102, majf=0, minf=9 00:41:20.690 IO depths : 1=0.8%, 2=21.1%, 4=53.4%, 8=24.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 issued rwts: total=4474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.690 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.690 filename1: (groupid=0, jobs=1): err= 0: pid=3392477: Mon Oct 28 15:37:07 2024 00:41:20.690 read: IOPS=905, BW=7240KiB/s (7414kB/s)(35.4MiB/5002msec) 00:41:20.690 slat (nsec): min=4657, max=83449, avg=23492.01, stdev=14310.67 00:41:20.690 clat (usec): min=1136, max=16969, avg=8744.76, stdev=1458.87 00:41:20.690 lat (usec): min=1150, max=16983, avg=8768.25, stdev=1458.79 00:41:20.690 clat percentiles (usec): 00:41:20.690 | 1.00th=[ 4883], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7898], 
00:41:20.690 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 9110], 00:41:20.690 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:41:20.690 | 99.00th=[12518], 99.50th=[13829], 99.90th=[15795], 99.95th=[16450], 00:41:20.690 | 99.99th=[16909] 00:41:20.690 bw ( KiB/s): min= 6384, max= 8032, per=25.11%, avg=7260.44, stdev=610.51, samples=9 00:41:20.690 iops : min= 798, max= 1004, avg=907.56, stdev=76.31, samples=9 00:41:20.690 lat (msec) : 2=0.11%, 4=0.57%, 10=82.81%, 20=16.50% 00:41:20.690 cpu : usr=96.10%, sys=2.98%, ctx=28, majf=0, minf=9 00:41:20.690 IO depths : 1=1.2%, 2=19.7%, 4=54.5%, 8=24.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 issued rwts: total=4527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.690 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.690 filename1: (groupid=0, jobs=1): err= 0: pid=3392478: Mon Oct 28 15:37:07 2024 00:41:20.690 read: IOPS=897, BW=7178KiB/s (7351kB/s)(35.1MiB/5004msec) 00:41:20.690 slat (nsec): min=4577, max=80250, avg=23867.33, stdev=14594.07 00:41:20.690 clat (usec): min=1296, max=18860, avg=8816.61, stdev=1736.66 00:41:20.690 lat (usec): min=1312, max=18880, avg=8840.48, stdev=1737.09 00:41:20.690 clat percentiles (usec): 00:41:20.690 | 1.00th=[ 3621], 5.00th=[ 6259], 10.00th=[ 7111], 20.00th=[ 7963], 00:41:20.690 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 9110], 00:41:20.690 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:41:20.690 | 99.00th=[15270], 99.50th=[16450], 99.90th=[17695], 99.95th=[18744], 00:41:20.690 | 99.99th=[18744] 00:41:20.690 bw ( KiB/s): min= 6256, max= 8192, per=24.80%, avg=7171.20, stdev=614.00, samples=10 00:41:20.690 iops : min= 782, max= 1024, avg=896.40, stdev=76.75, samples=10 00:41:20.690 lat (msec) : 2=0.20%, 4=0.94%, 10=81.60%, 20=17.26% 00:41:20.690 cpu : usr=97.38%, sys=2.12%, ctx=7, majf=0, minf=9 00:41:20.690 IO depths : 1=1.7%, 2=21.4%, 4=53.3%, 8=23.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.690 issued rwts: total=4490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.690 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.690 00:41:20.690 Run status group 0 (all jobs): 00:41:20.690 READ: bw=28.2MiB/s (29.6MB/s), 7156KiB/s-7346KiB/s (7327kB/s-7522kB/s), io=141MiB (148MB), run=5002-5004msec 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
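Quick consistency check on the READ summary above: the four jobs' individual bandwidths add up to 7346 + 7156 + 7240 + 7178 = 28920 KiB/s, i.e. about 28.2 MiB/s, and their per-job totals (35.9 + 35.0 + 35.4 + 35.1 MiB) come to roughly 141 MiB over the ~5 s runtime, matching the reported aggregate of 28.2 MiB/s and io=141 MiB.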
00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:20.690 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.691 00:41:20.691 real 0m25.951s 00:41:20.691 user 4m38.683s 00:41:20.691 sys 0m5.442s 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 ************************************ 00:41:20.691 END TEST fio_dif_rand_params 00:41:20.691 ************************************ 00:41:20.691 15:37:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:20.691 15:37:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:20.691 15:37:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 ************************************ 00:41:20.691 START TEST fio_dif_digest 00:41:20.691 ************************************ 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:20.691 15:37:07 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 bdev_null0 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.691 [2024-10-28 15:37:07.495606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.691 { 00:41:20.691 "params": { 00:41:20.691 "name": "Nvme$subsystem", 00:41:20.691 "trtype": "$TEST_TRANSPORT", 00:41:20.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.691 "adrfam": "ipv4", 00:41:20.691 "trsvcid": "$NVMF_PORT", 00:41:20.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.691 "hdgst": ${hdgst:-false}, 00:41:20.691 "ddgst": ${ddgst:-false} 00:41:20.691 }, 00:41:20.691 "method": "bdev_nvme_attach_controller" 00:41:20.691 } 00:41:20.691 EOF 00:41:20.691 )") 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.691 15:37:07 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.691 "params": { 00:41:20.691 "name": "Nvme0", 00:41:20.691 "trtype": "tcp", 00:41:20.691 "traddr": "10.0.0.2", 00:41:20.691 "adrfam": "ipv4", 00:41:20.691 "trsvcid": "4420", 00:41:20.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.691 "hdgst": true, 00:41:20.691 "ddgst": true 00:41:20.691 }, 00:41:20.691 "method": "bdev_nvme_attach_controller" 00:41:20.691 }' 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.691 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.951 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:20.951 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.951 15:37:07 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.951 15:37:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:21.212 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:21.212 ... 00:41:21.212 fio-3.35 00:41:21.212 Starting 3 threads 00:41:33.429 00:41:33.429 filename0: (groupid=0, jobs=1): err= 0: pid=3393349: Mon Oct 28 15:37:18 2024 00:41:33.429 read: IOPS=84, BW=10.5MiB/s (11.0MB/s)(106MiB/10055msec) 00:41:33.429 slat (nsec): min=5173, max=34773, avg=16066.64, stdev=2289.22 00:41:33.429 clat (usec): min=14107, max=74766, avg=35579.64, stdev=6865.97 00:41:33.429 lat (usec): min=14123, max=74781, avg=35595.71, stdev=6865.83 00:41:33.429 clat percentiles (usec): 00:41:33.429 | 1.00th=[17957], 5.00th=[22938], 10.00th=[25560], 20.00th=[28967], 00:41:33.429 | 30.00th=[32637], 40.00th=[36439], 50.00th=[37487], 60.00th=[38536], 00:41:33.430 | 70.00th=[39584], 80.00th=[40633], 90.00th=[42206], 95.00th=[43779], 00:41:33.430 | 99.00th=[49546], 99.50th=[50070], 99.90th=[74974], 99.95th=[74974], 00:41:33.430 | 99.99th=[74974] 00:41:33.430 bw ( KiB/s): min= 9216, max=14336, per=30.19%, avg=10790.40, stdev=1124.42, samples=20 00:41:33.430 iops : min= 72, max= 112, avg=84.30, stdev= 8.78, samples=20 00:41:33.430 lat (msec) : 20=2.48%, 50=97.04%, 100=0.47% 00:41:33.430 cpu : usr=94.66%, sys=4.82%, ctx=17, majf=0, minf=143 00:41:33.430 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:33.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.430 issued rwts: total=846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.430 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:33.430 filename0: (groupid=0, jobs=1): err= 0: pid=3393350: Mon Oct 28 15:37:18 2024 00:41:33.430 read: IOPS=105, BW=13.2MiB/s (13.8MB/s)(133MiB/10058msec) 00:41:33.430 slat (nsec): min=5931, max=34272, avg=16361.04, stdev=2640.11 00:41:33.430 clat (usec): min=12834, max=73460, avg=28402.41, stdev=6901.46 00:41:33.430 lat (usec): min=12866, max=73476, avg=28418.77, stdev=6901.32 00:41:33.430 clat percentiles (usec): 00:41:33.430 | 1.00th=[15664], 5.00th=[19530], 10.00th=[21890], 20.00th=[24249], 00:41:33.430 | 30.00th=[26084], 40.00th=[27395], 50.00th=[28443], 60.00th=[29230], 00:41:33.430 | 70.00th=[30278], 80.00th=[31327], 90.00th=[32637], 95.00th=[33817], 00:41:33.430 | 99.00th=[68682], 99.50th=[71828], 99.90th=[72877], 99.95th=[73925], 00:41:33.430 | 99.99th=[73925] 00:41:33.430 bw ( KiB/s): min=11520, max=15616, per=37.86%, avg=13529.60, stdev=1229.91, samples=20 00:41:33.430 iops : min= 90, max= 122, avg=105.70, stdev= 9.61, samples=20 00:41:33.430 lat (msec) : 20=5.94%, 50=92.17%, 100=1.89% 00:41:33.430 cpu : usr=93.31%, sys=5.44%, ctx=176, majf=0, minf=148 00:41:33.430 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:33.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.430 issued rwts: total=1060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.430 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:33.430 filename0: (groupid=0, jobs=1): err= 0: pid=3393351: Mon Oct 28 
15:37:18 2024 00:41:33.430 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(113MiB/10054msec) 00:41:33.430 slat (nsec): min=5431, max=37517, avg=16592.30, stdev=2329.72 00:41:33.430 clat (usec): min=14099, max=77010, avg=33367.18, stdev=6421.78 00:41:33.430 lat (usec): min=14116, max=77026, avg=33383.77, stdev=6421.65 00:41:33.430 clat percentiles (usec): 00:41:33.430 | 1.00th=[16909], 5.00th=[21365], 10.00th=[24249], 20.00th=[27657], 00:41:33.430 | 30.00th=[30802], 40.00th=[33817], 50.00th=[34866], 60.00th=[35914], 00:41:33.430 | 70.00th=[36963], 80.00th=[38011], 90.00th=[39584], 95.00th=[41157], 00:41:33.430 | 99.00th=[43254], 99.50th=[44303], 99.90th=[77071], 99.95th=[77071], 00:41:33.430 | 99.99th=[77071] 00:41:33.430 bw ( KiB/s): min= 9984, max=15616, per=32.20%, avg=11507.20, stdev=1258.19, samples=20 00:41:33.430 iops : min= 78, max= 122, avg=89.90, stdev= 9.83, samples=20 00:41:33.430 lat (msec) : 20=3.10%, 50=96.67%, 100=0.22% 00:41:33.430 cpu : usr=94.90%, sys=4.56%, ctx=20, majf=0, minf=57 00:41:33.430 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:33.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.430 issued rwts: total=902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.430 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:33.430 00:41:33.430 Run status group 0 (all jobs): 00:41:33.430 READ: bw=34.9MiB/s (36.6MB/s), 10.5MiB/s-13.2MiB/s (11.0MB/s-13.8MB/s), io=351MiB (368MB), run=10054-10058msec 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.430 00:41:33.430 real 0m11.815s 00:41:33.430 user 0m30.122s 00:41:33.430 sys 0m1.956s 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:33.430 15:37:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:33.430 ************************************ 00:41:33.430 END TEST fio_dif_digest 00:41:33.430 ************************************ 00:41:33.430 15:37:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:33.430 15:37:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:33.430 
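Note on the digest pass above: fio_dif_digest reuses the same attach-controller configuration as the earlier passes, with one difference visible in the JSON printed earlier, namely that header and data digests are switched on. A minimal sketch of that single config entry is shown below (key order rearranged for readability; the enclosing envelope is assumed to be the same one gen_nvmf_target_json builds for the other passes):

  {
    "method": "bdev_nvme_attach_controller",
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": true,
      "ddgst": true
    }
  }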
15:37:19 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:33.430 rmmod nvme_tcp 00:41:33.430 rmmod nvme_fabrics 00:41:33.430 rmmod nvme_keyring 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3387047 ']' 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3387047 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3387047 ']' 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3387047 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3387047 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3387047' 00:41:33.430 killing process with pid 3387047 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3387047 00:41:33.430 15:37:19 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3387047 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:33.430 15:37:19 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:34.815 Waiting for block devices as requested 00:41:34.815 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:41:34.815 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:35.094 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:35.094 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:35.094 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:35.354 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:35.354 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:35.354 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:35.354 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:35.614 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:35.614 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:35.614 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:35.875 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:35.875 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:35.875 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:36.135 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:36.136 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:36.136 15:37:22 nvmf_dif -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:41:36.136 15:37:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.136 15:37:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:36.136 15:37:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.682 15:37:25 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:38.682 00:41:38.682 real 1m13.580s 00:41:38.682 user 6m41.976s 00:41:38.682 sys 0m19.013s 00:41:38.682 15:37:25 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:38.682 15:37:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:38.682 ************************************ 00:41:38.682 END TEST nvmf_dif 00:41:38.682 ************************************ 00:41:38.682 15:37:25 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:38.682 15:37:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:38.682 15:37:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:38.682 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:41:38.682 ************************************ 00:41:38.682 START TEST nvmf_abort_qd_sizes 00:41:38.682 ************************************ 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:38.682 * Looking for test storage... 00:41:38.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # lcov --version 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:38.682 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:41:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.683 --rc genhtml_branch_coverage=1 00:41:38.683 --rc genhtml_function_coverage=1 00:41:38.683 --rc genhtml_legend=1 00:41:38.683 --rc geninfo_all_blocks=1 00:41:38.683 --rc geninfo_unexecuted_blocks=1 00:41:38.683 00:41:38.683 ' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:41:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.683 --rc genhtml_branch_coverage=1 00:41:38.683 --rc genhtml_function_coverage=1 00:41:38.683 --rc genhtml_legend=1 00:41:38.683 --rc geninfo_all_blocks=1 00:41:38.683 --rc geninfo_unexecuted_blocks=1 00:41:38.683 00:41:38.683 ' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:41:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.683 --rc genhtml_branch_coverage=1 00:41:38.683 --rc genhtml_function_coverage=1 00:41:38.683 --rc genhtml_legend=1 00:41:38.683 --rc geninfo_all_blocks=1 00:41:38.683 --rc geninfo_unexecuted_blocks=1 00:41:38.683 00:41:38.683 ' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:41:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.683 --rc genhtml_branch_coverage=1 00:41:38.683 --rc genhtml_function_coverage=1 00:41:38.683 --rc genhtml_legend=1 00:41:38.683 --rc geninfo_all_blocks=1 00:41:38.683 --rc geninfo_unexecuted_blocks=1 00:41:38.683 00:41:38.683 ' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:38.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:38.683 15:37:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:41.224 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:41.224 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:41.224 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:41.225 Found net devices under 0000:84:00.0: cvl_0_0 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:41.225 Found net devices under 0000:84:00.1: cvl_0_1 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:41.225 15:37:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:41.225 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:41.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:41.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:41:41.484 00:41:41.484 --- 10.0.0.2 ping statistics --- 00:41:41.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.484 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:41:41.484 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:41.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:41.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:41:41.485 00:41:41.485 --- 10.0.0.1 ping statistics --- 00:41:41.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.485 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:41:41.485 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:41.485 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:41.485 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:41.485 15:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:43.392 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:43.392 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:43.392 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:43.392 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:43.392 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:43.392 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:43.392 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:43.392 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:43.392 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:44.330 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:41:44.330 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:44.330 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:44.330 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:44.330 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:44.330 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:44.330 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3398306 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3398306 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3398306 ']' 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
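For readers following the trace above: with NET_TYPE=phy the helper moves one port of the ice-driven NIC pair into its own network namespace and launches the target inside it, so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) exchange traffic over a real link while running on one host. A condensed sketch of the commands just executed (interface names and addresses are specific to this run):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1      # start from a clean slate
    ip netns add cvl_0_0_ns_spdk                            # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address (host namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # ipts wraps iptables and tags the rule so teardown can find it again later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                      # host namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> host namespace
    # the target itself then runs inside the namespace so it can listen on 10.0.0.2:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf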
00:41:44.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:44.588 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:44.588 [2024-10-28 15:37:31.276782] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:41:44.588 [2024-10-28 15:37:31.276872] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:44.588 [2024-10-28 15:37:31.410862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:44.845 [2024-10-28 15:37:31.532633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:44.845 [2024-10-28 15:37:31.532762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:44.845 [2024-10-28 15:37:31.532799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:44.845 [2024-10-28 15:37:31.532828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:44.845 [2024-10-28 15:37:31.532853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:44.845 [2024-10-28 15:37:31.536097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.845 [2024-10-28 15:37:31.536196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:44.845 [2024-10-28 15:37:31.536293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:44.845 [2024-10-28 15:37:31.536297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.785 15:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:45.785 15:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:41:45.785 15:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:45.785 15:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:45.785 15:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:82:00.0 ]] 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:46.044 
15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:82:00.0 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:46.044 15:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:46.044 ************************************ 00:41:46.044 START TEST spdk_target_abort 00:41:46.044 ************************************ 00:41:46.044 15:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:41:46.044 15:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:46.044 15:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:41:46.044 15:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.044 15:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.323 spdk_targetn1 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.323 [2024-10-28 15:37:35.558236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.323 [2024-10-28 15:37:35.606992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:49.323 15:37:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:52.604 Initializing NVMe Controllers 00:41:52.604 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:52.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:52.604 Initialization complete. Launching workers. 00:41:52.604 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11534, failed: 0 00:41:52.604 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 10287 00:41:52.604 success 704, unsuccessful 543, failed 0 00:41:52.604 15:37:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:52.604 15:37:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:55.992 Initializing NVMe Controllers 00:41:55.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:55.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:55.992 Initialization complete. Launching workers. 00:41:55.992 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8669, failed: 0 00:41:55.992 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7435 00:41:55.992 success 346, unsuccessful 888, failed 0 00:41:55.992 15:37:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:55.992 15:37:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:59.271 Initializing NVMe Controllers 00:41:59.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:59.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:59.271 Initialization complete. Launching workers. 
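The rpc_cmd calls traced for this test are ordinary SPDK RPCs; issued by hand against the same /var/tmp/spdk.sock (rpc_cmd is the autotest wrapper that forwards to scripts/rpc.py), the sequence would look roughly like this sketch:

    rpc=./scripts/rpc.py   # talks to the default /var/tmp/spdk.sock the target announced above
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target   # local NVMe -> bdev spdk_targetn1
    $rpc nvmf_create_transport -t tcp -o -u 8192                              # TCP transport, options from NVMF_TRANSPORT_OPTS
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # rabort then runs the abort example once per queue depth from qds=(4 24 64):
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Each run prints the "I/O completed / abort submitted / success, unsuccessful" counters seen in the output; the queue depth passed with -q is the only knob varied between the three runs.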
00:41:59.271 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31526, failed: 0 00:41:59.271 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2840, failed to submit 28686 00:41:59.271 success 525, unsuccessful 2315, failed 0 00:41:59.271 15:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:59.271 15:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.271 15:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.271 15:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.271 15:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:59.271 15:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.271 15:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:00.204 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.204 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3398306 00:42:00.204 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3398306 ']' 00:42:00.204 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3398306 00:42:00.204 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:00.204 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:00.204 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3398306 00:42:00.462 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:00.462 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:00.462 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3398306' 00:42:00.462 killing process with pid 3398306 00:42:00.462 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3398306 00:42:00.462 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3398306 00:42:00.721 00:42:00.721 real 0m14.698s 00:42:00.721 user 0m59.238s 00:42:00.721 sys 0m3.193s 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:00.721 ************************************ 00:42:00.721 END TEST spdk_target_abort 00:42:00.721 ************************************ 00:42:00.721 15:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:00.721 15:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:00.721 15:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:00.721 15:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.721 ************************************ 00:42:00.721 START TEST kernel_target_abort 00:42:00.721 
************************************ 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:00.721 15:37:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:02.097 Waiting for block devices as requested 00:42:02.356 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:42:02.356 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:02.615 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:02.615 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:02.615 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:02.875 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:02.875 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:02.875 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:02.875 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:03.133 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:03.133 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:03.133 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:03.392 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:03.392 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:03.392 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:03.392 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:03.651 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:03.651 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:03.910 No valid GPT data, bailing 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:03.910 15:37:50 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:42:03.910 00:42:03.910 Discovery Log Number of Records 2, Generation counter 2 00:42:03.910 =====Discovery Log Entry 0====== 00:42:03.910 trtype: tcp 00:42:03.910 adrfam: ipv4 00:42:03.910 subtype: current discovery subsystem 00:42:03.910 treq: not specified, sq flow control disable supported 00:42:03.910 portid: 1 00:42:03.910 trsvcid: 4420 00:42:03.910 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:03.910 traddr: 10.0.0.1 00:42:03.910 eflags: none 00:42:03.910 sectype: none 00:42:03.910 =====Discovery Log Entry 1====== 00:42:03.910 trtype: tcp 00:42:03.910 adrfam: ipv4 00:42:03.910 subtype: nvme subsystem 00:42:03.910 treq: not specified, sq flow control disable supported 00:42:03.910 portid: 1 00:42:03.910 trsvcid: 4420 00:42:03.910 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:03.910 traddr: 10.0.0.1 00:42:03.910 eflags: none 00:42:03.910 sectype: none 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:03.910 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.911 15:37:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:03.911 15:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:07.209 Initializing NVMe Controllers 00:42:07.209 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:07.209 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:07.209 Initialization complete. Launching workers. 00:42:07.209 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 19709, failed: 0 00:42:07.209 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19709, failed to submit 0 00:42:07.209 success 0, unsuccessful 19709, failed 0 00:42:07.209 15:37:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:07.209 15:37:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:10.507 Initializing NVMe Controllers 00:42:10.507 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:10.507 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:10.507 Initialization complete. Launching workers. 
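Worth noting for the configure_kernel_target trace further up: xtrace prints the echo commands but not their redirections, so the configfs attribute names never appear in the log. Against the standard Linux nvmet configfs layout the sequence corresponds roughly to the following (attribute names are the conventional ones, not read from this trace; the attr_model target for the first echo is an assumption):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"    # block device picked by the GPT probe above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                       # expose the subsystem on the port

The nvme discover output earlier (two discovery log entries, the second for nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420) confirms the kernel target came up listening before the abort runs started.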
00:42:10.507 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36697, failed: 0 00:42:10.507 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 9138, failed to submit 27559 00:42:10.507 success 0, unsuccessful 9138, failed 0 00:42:10.507 15:37:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:10.508 15:37:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:13.792 Initializing NVMe Controllers 00:42:13.792 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:13.792 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:13.792 Initialization complete. Launching workers. 00:42:13.792 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48887, failed: 0 00:42:13.792 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12198, failed to submit 36689 00:42:13.792 success 0, unsuccessful 12198, failed 0 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:13.792 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:15.700 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:15.700 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:15.700 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:15.700 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:15.700 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:15.700 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:15.700 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:15.700 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:15.700 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:15.700 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:15.700 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:15.700 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:15.700 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:15.700 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:15.700 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:15.700 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:16.272 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:42:16.533 00:42:16.533 real 0m15.801s 00:42:16.533 user 0m6.948s 00:42:16.533 sys 0m4.158s 00:42:16.533 15:38:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:16.533 15:38:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:16.533 ************************************ 00:42:16.533 END TEST kernel_target_abort 00:42:16.533 ************************************ 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:16.533 rmmod nvme_tcp 00:42:16.533 rmmod nvme_fabrics 00:42:16.533 rmmod nvme_keyring 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3398306 ']' 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3398306 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3398306 ']' 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3398306 00:42:16.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3398306) - No such process 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3398306 is not found' 00:42:16.533 Process with pid 3398306 is not found 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:16.533 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:18.443 Waiting for block devices as requested 00:42:18.443 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:42:18.443 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:18.702 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:18.702 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:18.702 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:18.962 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:18.962 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:18.962 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:19.220 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:19.220 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:19.220 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:19.479 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:19.479 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:19.479 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:19.479 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:19.738 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:19.738 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:19.738 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:22.275 15:38:08 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:22.275 00:42:22.275 real 0m43.549s 00:42:22.275 user 1m9.585s 00:42:22.275 sys 0m12.703s 00:42:22.275 15:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:22.275 15:38:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:22.275 ************************************ 00:42:22.275 END TEST nvmf_abort_qd_sizes 00:42:22.275 ************************************ 00:42:22.275 15:38:08 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:22.275 15:38:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:22.275 15:38:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:22.275 15:38:08 -- common/autotest_common.sh@10 -- # set +x 00:42:22.275 ************************************ 00:42:22.275 START TEST keyring_file 00:42:22.275 ************************************ 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:22.275 * Looking for test storage... 
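On the teardown side, every iptables rule the tests added carried an SPDK_NVMF comment (see the ipts expansion earlier), which is what makes the iptr step above safe: it rewrites the saved ruleset minus those tagged entries. A sketch of the fini path (the _remove_spdk_ns body runs with xtrace disabled, so the netns deletion shown here is an assumption rather than taken from the log):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address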
00:42:22.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1689 -- # lcov --version 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:22.275 15:38:08 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:42:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.275 --rc genhtml_branch_coverage=1 00:42:22.275 --rc genhtml_function_coverage=1 00:42:22.275 --rc genhtml_legend=1 00:42:22.275 --rc geninfo_all_blocks=1 00:42:22.275 --rc geninfo_unexecuted_blocks=1 00:42:22.275 00:42:22.275 ' 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:42:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.275 --rc genhtml_branch_coverage=1 00:42:22.275 --rc genhtml_function_coverage=1 00:42:22.275 --rc genhtml_legend=1 00:42:22.275 --rc geninfo_all_blocks=1 
00:42:22.275 --rc geninfo_unexecuted_blocks=1 00:42:22.275 00:42:22.275 ' 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:42:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.275 --rc genhtml_branch_coverage=1 00:42:22.275 --rc genhtml_function_coverage=1 00:42:22.275 --rc genhtml_legend=1 00:42:22.275 --rc geninfo_all_blocks=1 00:42:22.275 --rc geninfo_unexecuted_blocks=1 00:42:22.275 00:42:22.275 ' 00:42:22.275 15:38:08 keyring_file -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:42:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.275 --rc genhtml_branch_coverage=1 00:42:22.275 --rc genhtml_function_coverage=1 00:42:22.275 --rc genhtml_legend=1 00:42:22.275 --rc geninfo_all_blocks=1 00:42:22.275 --rc geninfo_unexecuted_blocks=1 00:42:22.275 00:42:22.275 ' 00:42:22.275 15:38:08 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:22.275 15:38:08 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:22.275 15:38:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:22.275 15:38:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:22.275 15:38:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:22.275 15:38:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:22.275 15:38:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:22.275 15:38:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:22.275 15:38:09 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.275 15:38:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.275 15:38:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.275 15:38:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:22.276 15:38:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:22.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
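The prep_key trace that follows builds the two PSK files used throughout this test: each key string is formatted as an NVMe TLS interchange secret, written to a mktemp path, and locked down with chmod 0600. A minimal sketch of such a helper is shown below; it is an assumption-based reconstruction, since the inline python run by format_interchange_psk is not visible in this log. In particular, treating the key string verbatim as the PSK bytes and appending a little-endian CRC32 before base64 encoding are assumptions.

# Sketch only: mirrors what prep_key / format_interchange_psk appear to do in
# the trace below. Assumptions: the key argument's literal characters are the
# PSK bytes, and a little-endian CRC32 of those bytes is appended before
# base64 encoding into the "NVMeTLSkey-1:<digest>:<base64>:" interchange form.
format_psk_file() {
	local key=$1 digest=$2 path
	path=$(mktemp)
	python3 - "$key" "$digest" > "$path" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PYEOF
	chmod 0600 "$path"
	echo "$path"
}

# e.g. key0path=$(format_psk_file 00112233445566778899aabbccddeeff 0)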
00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NCA1ysui1z 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:22.276 15:38:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NCA1ysui1z 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NCA1ysui1z 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NCA1ysui1z 00:42:22.276 15:38:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:22.276 15:38:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:22.535 15:38:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:22.535 15:38:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:22.535 15:38:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:22.535 15:38:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:22.535 15:38:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zkvJyGxHto 00:42:22.535 15:38:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:22.535 15:38:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:22.535 15:38:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:22.535 15:38:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:22.535 15:38:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:22.535 15:38:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:22.535 15:38:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:22.535 15:38:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zkvJyGxHto 00:42:22.536 15:38:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zkvJyGxHto 00:42:22.536 15:38:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.zkvJyGxHto 00:42:22.536 15:38:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=3404342 00:42:22.536 15:38:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:22.536 15:38:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3404342 00:42:22.536 15:38:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3404342 ']' 00:42:22.536 15:38:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:22.536 15:38:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:22.536 15:38:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:22.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:22.536 15:38:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:22.536 15:38:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:22.536 [2024-10-28 15:38:09.379437] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:42:22.536 [2024-10-28 15:38:09.379621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404342 ] 00:42:22.795 [2024-10-28 15:38:09.542808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.795 [2024-10-28 15:38:09.659642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:23.363 15:38:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:23.363 [2024-10-28 15:38:10.110726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:23.363 null0 00:42:23.363 [2024-10-28 15:38:10.142930] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:23.363 [2024-10-28 15:38:10.143755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.363 15:38:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:23.363 [2024-10-28 15:38:10.170843] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:23.363 request: 00:42:23.363 { 00:42:23.363 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:23.363 "secure_channel": false, 00:42:23.363 "listen_address": { 00:42:23.363 "trtype": "tcp", 00:42:23.363 "traddr": "127.0.0.1", 00:42:23.363 "trsvcid": "4420" 00:42:23.363 }, 00:42:23.363 "method": "nvmf_subsystem_add_listener", 00:42:23.363 "req_id": 1 00:42:23.363 } 00:42:23.363 Got JSON-RPC error response 00:42:23.363 response: 00:42:23.363 { 00:42:23.363 
"code": -32602, 00:42:23.363 "message": "Invalid parameters" 00:42:23.363 } 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:23.363 15:38:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=3404475 00:42:23.363 15:38:10 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:23.363 15:38:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3404475 /var/tmp/bperf.sock 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3404475 ']' 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:23.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:23.363 15:38:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:23.622 [2024-10-28 15:38:10.272080] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:42:23.622 [2024-10-28 15:38:10.272233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404475 ] 00:42:23.622 [2024-10-28 15:38:10.417434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.881 [2024-10-28 15:38:10.531783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:24.816 15:38:11 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:24.816 15:38:11 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:24.816 15:38:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:24.816 15:38:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:25.075 15:38:11 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zkvJyGxHto 00:42:25.075 15:38:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zkvJyGxHto 00:42:25.644 15:38:12 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:25.644 15:38:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:25.644 15:38:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:25.644 15:38:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:25.644 15:38:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:26.210 15:38:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NCA1ysui1z == \/\t\m\p\/\t\m\p\.\N\C\A\1\y\s\u\i\1\z ]] 00:42:26.210 15:38:13 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:26.210 15:38:13 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:26.210 15:38:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:26.210 15:38:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:26.210 15:38:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:27.147 15:38:13 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.zkvJyGxHto == \/\t\m\p\/\t\m\p\.\z\k\v\J\y\G\x\H\t\o ]] 00:42:27.147 15:38:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:27.147 15:38:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:27.147 15:38:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.147 15:38:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.147 15:38:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:27.147 15:38:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.715 15:38:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:27.715 15:38:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:27.715 15:38:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:27.715 15:38:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:27.715 15:38:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:27.715 15:38:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:27.715 15:38:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.282 15:38:14 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:28.282 15:38:14 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:28.282 15:38:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:28.542 [2024-10-28 15:38:15.337550] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:28.800 nvme0n1 00:42:28.800 15:38:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:28.800 15:38:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:28.800 15:38:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:28.800 15:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:28.800 15:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.801 15:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:29.060 15:38:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:29.060 15:38:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:29.060 15:38:15 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:42:29.060 15:38:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:29.060 15:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:29.060 15:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:29.060 15:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.999 15:38:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:29.999 15:38:16 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:29.999 Running I/O for 1 seconds... 00:42:31.377 3838.00 IOPS, 14.99 MiB/s 00:42:31.377 Latency(us) 00:42:31.377 [2024-10-28T14:38:18.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:31.377 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:31.377 nvme0n1 : 1.02 3882.47 15.17 0.00 0.00 32717.46 8398.32 43690.67 00:42:31.377 [2024-10-28T14:38:18.244Z] =================================================================================================================== 00:42:31.377 [2024-10-28T14:38:18.244Z] Total : 3882.47 15.17 0.00 0.00 32717.46 8398.32 43690.67 00:42:31.377 { 00:42:31.377 "results": [ 00:42:31.377 { 00:42:31.377 "job": "nvme0n1", 00:42:31.377 "core_mask": "0x2", 00:42:31.377 "workload": "randrw", 00:42:31.377 "percentage": 50, 00:42:31.377 "status": "finished", 00:42:31.377 "queue_depth": 128, 00:42:31.377 "io_size": 4096, 00:42:31.377 "runtime": 1.021772, 00:42:31.377 "iops": 3882.470844767717, 00:42:31.377 "mibps": 15.165901737373895, 00:42:31.377 "io_failed": 0, 00:42:31.377 "io_timeout": 0, 00:42:31.377 "avg_latency_us": 32717.460142471686, 00:42:31.377 "min_latency_us": 8398.317037037037, 00:42:31.377 "max_latency_us": 43690.666666666664 00:42:31.377 } 00:42:31.377 ], 00:42:31.377 "core_count": 1 00:42:31.377 } 00:42:31.377 15:38:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:31.377 15:38:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:31.378 15:38:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:31.378 15:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:31.378 15:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:31.378 15:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:31.378 15:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.378 15:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:32.312 15:38:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:32.312 15:38:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:32.312 15:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:32.312 15:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:32.312 15:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.312 15:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:32.312 15:38:18 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.570 15:38:19 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:32.570 15:38:19 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:32.570 15:38:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:32.570 15:38:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:32.570 15:38:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:32.570 15:38:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:32.570 15:38:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:32.570 15:38:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:32.570 15:38:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:32.570 15:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:33.135 [2024-10-28 15:38:19.775153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:33.135 [2024-10-28 15:38:19.775363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e035a0 (107): Transport endpoint is not connected 00:42:33.135 [2024-10-28 15:38:19.776354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e035a0 (9): Bad file descriptor 00:42:33.135 [2024-10-28 15:38:19.777356] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:33.135 [2024-10-28 15:38:19.777380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:33.135 [2024-10-28 15:38:19.777396] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:33.135 [2024-10-28 15:38:19.777416] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:33.135 request: 00:42:33.135 { 00:42:33.135 "name": "nvme0", 00:42:33.135 "trtype": "tcp", 00:42:33.135 "traddr": "127.0.0.1", 00:42:33.135 "adrfam": "ipv4", 00:42:33.135 "trsvcid": "4420", 00:42:33.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:33.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:33.135 "prchk_reftag": false, 00:42:33.135 "prchk_guard": false, 00:42:33.135 "hdgst": false, 00:42:33.136 "ddgst": false, 00:42:33.136 "psk": "key1", 00:42:33.136 "allow_unrecognized_csi": false, 00:42:33.136 "method": "bdev_nvme_attach_controller", 00:42:33.136 "req_id": 1 00:42:33.136 } 00:42:33.136 Got JSON-RPC error response 00:42:33.136 response: 00:42:33.136 { 00:42:33.136 "code": -5, 00:42:33.136 "message": "Input/output error" 00:42:33.136 } 00:42:33.136 15:38:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:33.136 15:38:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:33.136 15:38:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:33.136 15:38:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:33.136 15:38:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:33.136 15:38:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:33.136 15:38:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:33.136 15:38:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.136 15:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.136 15:38:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:33.425 15:38:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:33.425 15:38:20 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:33.425 15:38:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:33.425 15:38:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:33.425 15:38:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.425 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.425 15:38:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:34.019 15:38:20 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:34.019 15:38:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:34.019 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:34.587 15:38:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:34.587 15:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:35.155 15:38:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:35.155 15:38:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:35.155 15:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.722 15:38:22 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:35.722 15:38:22 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.NCA1ysui1z 00:42:35.722 15:38:22 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:35.722 15:38:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:35.722 15:38:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:35.722 15:38:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:35.722 15:38:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:35.722 15:38:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:35.722 15:38:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:35.722 15:38:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:35.722 15:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:35.982 [2024-10-28 15:38:22.848247] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NCA1ysui1z': 0100660 00:42:35.982 [2024-10-28 15:38:22.848296] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:36.242 request: 00:42:36.242 { 00:42:36.242 "name": "key0", 00:42:36.242 "path": "/tmp/tmp.NCA1ysui1z", 00:42:36.242 "method": "keyring_file_add_key", 00:42:36.242 "req_id": 1 00:42:36.242 } 00:42:36.242 Got JSON-RPC error response 00:42:36.242 response: 00:42:36.242 { 00:42:36.242 "code": -1, 00:42:36.242 "message": "Operation not permitted" 00:42:36.242 } 00:42:36.242 15:38:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:36.242 15:38:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:36.242 15:38:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:36.242 15:38:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:36.242 15:38:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.NCA1ysui1z 00:42:36.242 15:38:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:36.242 15:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NCA1ysui1z 00:42:36.812 15:38:23 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.NCA1ysui1z 00:42:36.812 15:38:23 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:36.812 15:38:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:36.812 15:38:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:36.812 15:38:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:36.812 15:38:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:36.812 15:38:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:37.381 15:38:24 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:37.381 15:38:24 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:37.381 15:38:24 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:37.381 15:38:24 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:37.381 15:38:24 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:37.381 15:38:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:37.381 15:38:24 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:37.381 15:38:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:37.381 15:38:24 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:37.381 15:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:37.951 [2024-10-28 15:38:24.661448] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NCA1ysui1z': No such file or directory 00:42:37.951 [2024-10-28 15:38:24.661535] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:37.951 [2024-10-28 15:38:24.661595] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:37.951 [2024-10-28 15:38:24.661627] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:37.951 [2024-10-28 15:38:24.661680] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:37.951 [2024-10-28 15:38:24.661714] bdev_nvme.c:6576:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:37.951 request: 00:42:37.951 { 00:42:37.951 "name": "nvme0", 00:42:37.951 "trtype": "tcp", 00:42:37.951 "traddr": "127.0.0.1", 00:42:37.951 "adrfam": "ipv4", 00:42:37.951 "trsvcid": "4420", 00:42:37.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:37.951 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:37.951 "prchk_reftag": false, 00:42:37.951 "prchk_guard": false, 00:42:37.951 "hdgst": false, 00:42:37.951 "ddgst": false, 00:42:37.951 "psk": "key0", 00:42:37.951 "allow_unrecognized_csi": false, 00:42:37.951 "method": "bdev_nvme_attach_controller", 00:42:37.951 "req_id": 1 00:42:37.951 } 00:42:37.951 Got JSON-RPC error response 00:42:37.951 response: 00:42:37.951 { 00:42:37.951 "code": -19, 00:42:37.951 "message": "No such device" 00:42:37.951 } 00:42:37.951 15:38:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:37.952 15:38:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:37.952 15:38:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:37.952 15:38:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:37.952 15:38:24 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:37.952 15:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:38.211 15:38:24 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:38.211 15:38:24 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:38.211 15:38:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:38.211 15:38:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:38.211 15:38:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:38.211 15:38:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:38.211 15:38:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pukQ2WBmfy 00:42:38.211 15:38:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:38.211 15:38:24 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:38.211 15:38:24 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:38.211 15:38:24 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:38.211 15:38:24 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:38.211 15:38:24 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:38.211 15:38:24 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:38.211 15:38:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pukQ2WBmfy 00:42:38.211 15:38:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pukQ2WBmfy 00:42:38.211 15:38:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.pukQ2WBmfy 00:42:38.211 15:38:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pukQ2WBmfy 00:42:38.211 15:38:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pukQ2WBmfy 00:42:38.783 15:38:25 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:38.783 15:38:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:39.353 nvme0n1 00:42:39.353 15:38:26 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:39.353 15:38:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:39.353 15:38:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.353 15:38:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.353 15:38:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.353 15:38:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:39.924 15:38:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:39.924 15:38:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:39.924 15:38:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:40.494 15:38:27 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:40.494 15:38:27 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:40.494 15:38:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.494 15:38:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:40.494 15:38:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.753 15:38:27 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:40.753 15:38:27 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:40.753 15:38:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.753 15:38:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.753 15:38:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.753 15:38:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.753 15:38:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:41.322 15:38:27 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:41.322 15:38:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:41.322 15:38:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:41.890 15:38:28 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:41.890 15:38:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:41.890 15:38:28 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:42.151 15:38:28 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:42.151 15:38:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pukQ2WBmfy 00:42:42.151 15:38:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pukQ2WBmfy 00:42:42.411 15:38:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zkvJyGxHto 00:42:42.411 15:38:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zkvJyGxHto 00:42:43.350 15:38:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.351 15:38:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.611 nvme0n1 00:42:43.611 15:38:30 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:43.611 15:38:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:44.177 15:38:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:44.177 "subsystems": [ 00:42:44.177 { 00:42:44.177 "subsystem": "keyring", 00:42:44.177 "config": [ 00:42:44.177 { 00:42:44.177 "method": "keyring_file_add_key", 00:42:44.177 "params": { 00:42:44.177 "name": "key0", 00:42:44.177 "path": "/tmp/tmp.pukQ2WBmfy" 00:42:44.177 } 00:42:44.177 }, 00:42:44.177 { 00:42:44.177 "method": "keyring_file_add_key", 00:42:44.177 "params": { 00:42:44.177 "name": "key1", 00:42:44.177 "path": "/tmp/tmp.zkvJyGxHto" 00:42:44.177 } 00:42:44.177 } 00:42:44.177 ] 
00:42:44.177 }, 00:42:44.177 { 00:42:44.177 "subsystem": "iobuf", 00:42:44.177 "config": [ 00:42:44.177 { 00:42:44.177 "method": "iobuf_set_options", 00:42:44.177 "params": { 00:42:44.177 "small_pool_count": 8192, 00:42:44.177 "large_pool_count": 1024, 00:42:44.177 "small_bufsize": 8192, 00:42:44.177 "large_bufsize": 135168, 00:42:44.177 "enable_numa": false 00:42:44.177 } 00:42:44.177 } 00:42:44.177 ] 00:42:44.177 }, 00:42:44.177 { 00:42:44.177 "subsystem": "sock", 00:42:44.177 "config": [ 00:42:44.177 { 00:42:44.177 "method": "sock_set_default_impl", 00:42:44.177 "params": { 00:42:44.177 "impl_name": "posix" 00:42:44.177 } 00:42:44.177 }, 00:42:44.177 { 00:42:44.177 "method": "sock_impl_set_options", 00:42:44.177 "params": { 00:42:44.177 "impl_name": "ssl", 00:42:44.177 "recv_buf_size": 4096, 00:42:44.177 "send_buf_size": 4096, 00:42:44.177 "enable_recv_pipe": true, 00:42:44.177 "enable_quickack": false, 00:42:44.177 "enable_placement_id": 0, 00:42:44.177 "enable_zerocopy_send_server": true, 00:42:44.177 "enable_zerocopy_send_client": false, 00:42:44.177 "zerocopy_threshold": 0, 00:42:44.177 "tls_version": 0, 00:42:44.177 "enable_ktls": false 00:42:44.177 } 00:42:44.177 }, 00:42:44.177 { 00:42:44.177 "method": "sock_impl_set_options", 00:42:44.177 "params": { 00:42:44.177 "impl_name": "posix", 00:42:44.177 "recv_buf_size": 2097152, 00:42:44.177 "send_buf_size": 2097152, 00:42:44.177 "enable_recv_pipe": true, 00:42:44.177 "enable_quickack": false, 00:42:44.178 "enable_placement_id": 0, 00:42:44.178 "enable_zerocopy_send_server": true, 00:42:44.178 "enable_zerocopy_send_client": false, 00:42:44.178 "zerocopy_threshold": 0, 00:42:44.178 "tls_version": 0, 00:42:44.178 "enable_ktls": false 00:42:44.178 } 00:42:44.178 } 00:42:44.178 ] 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "subsystem": "vmd", 00:42:44.178 "config": [] 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "subsystem": "accel", 00:42:44.178 "config": [ 00:42:44.178 { 00:42:44.178 "method": "accel_set_options", 00:42:44.178 "params": { 00:42:44.178 "small_cache_size": 128, 00:42:44.178 "large_cache_size": 16, 00:42:44.178 "task_count": 2048, 00:42:44.178 "sequence_count": 2048, 00:42:44.178 "buf_count": 2048 00:42:44.178 } 00:42:44.178 } 00:42:44.178 ] 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "subsystem": "bdev", 00:42:44.178 "config": [ 00:42:44.178 { 00:42:44.178 "method": "bdev_set_options", 00:42:44.178 "params": { 00:42:44.178 "bdev_io_pool_size": 65535, 00:42:44.178 "bdev_io_cache_size": 256, 00:42:44.178 "bdev_auto_examine": true, 00:42:44.178 "iobuf_small_cache_size": 128, 00:42:44.178 "iobuf_large_cache_size": 16 00:42:44.178 } 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "method": "bdev_raid_set_options", 00:42:44.178 "params": { 00:42:44.178 "process_window_size_kb": 1024, 00:42:44.178 "process_max_bandwidth_mb_sec": 0 00:42:44.178 } 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "method": "bdev_iscsi_set_options", 00:42:44.178 "params": { 00:42:44.178 "timeout_sec": 30 00:42:44.178 } 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "method": "bdev_nvme_set_options", 00:42:44.178 "params": { 00:42:44.178 "action_on_timeout": "none", 00:42:44.178 "timeout_us": 0, 00:42:44.178 "timeout_admin_us": 0, 00:42:44.178 "keep_alive_timeout_ms": 10000, 00:42:44.178 "arbitration_burst": 0, 00:42:44.178 "low_priority_weight": 0, 00:42:44.178 "medium_priority_weight": 0, 00:42:44.178 "high_priority_weight": 0, 00:42:44.178 "nvme_adminq_poll_period_us": 10000, 00:42:44.178 "nvme_ioq_poll_period_us": 0, 00:42:44.178 "io_queue_requests": 512, 
00:42:44.178 "delay_cmd_submit": true, 00:42:44.178 "transport_retry_count": 4, 00:42:44.178 "bdev_retry_count": 3, 00:42:44.178 "transport_ack_timeout": 0, 00:42:44.178 "ctrlr_loss_timeout_sec": 0, 00:42:44.178 "reconnect_delay_sec": 0, 00:42:44.178 "fast_io_fail_timeout_sec": 0, 00:42:44.178 "disable_auto_failback": false, 00:42:44.178 "generate_uuids": false, 00:42:44.178 "transport_tos": 0, 00:42:44.178 "nvme_error_stat": false, 00:42:44.178 "rdma_srq_size": 0, 00:42:44.178 "io_path_stat": false, 00:42:44.178 "allow_accel_sequence": false, 00:42:44.178 "rdma_max_cq_size": 0, 00:42:44.178 "rdma_cm_event_timeout_ms": 0, 00:42:44.178 "dhchap_digests": [ 00:42:44.178 "sha256", 00:42:44.178 "sha384", 00:42:44.178 "sha512" 00:42:44.178 ], 00:42:44.178 "dhchap_dhgroups": [ 00:42:44.178 "null", 00:42:44.178 "ffdhe2048", 00:42:44.178 "ffdhe3072", 00:42:44.178 "ffdhe4096", 00:42:44.178 "ffdhe6144", 00:42:44.178 "ffdhe8192" 00:42:44.178 ] 00:42:44.178 } 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "method": "bdev_nvme_attach_controller", 00:42:44.178 "params": { 00:42:44.178 "name": "nvme0", 00:42:44.178 "trtype": "TCP", 00:42:44.178 "adrfam": "IPv4", 00:42:44.178 "traddr": "127.0.0.1", 00:42:44.178 "trsvcid": "4420", 00:42:44.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.178 "prchk_reftag": false, 00:42:44.178 "prchk_guard": false, 00:42:44.178 "ctrlr_loss_timeout_sec": 0, 00:42:44.178 "reconnect_delay_sec": 0, 00:42:44.178 "fast_io_fail_timeout_sec": 0, 00:42:44.178 "psk": "key0", 00:42:44.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.178 "hdgst": false, 00:42:44.178 "ddgst": false, 00:42:44.178 "multipath": "multipath" 00:42:44.178 } 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "method": "bdev_nvme_set_hotplug", 00:42:44.178 "params": { 00:42:44.178 "period_us": 100000, 00:42:44.178 "enable": false 00:42:44.178 } 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "method": "bdev_wait_for_examine" 00:42:44.178 } 00:42:44.178 ] 00:42:44.178 }, 00:42:44.178 { 00:42:44.178 "subsystem": "nbd", 00:42:44.178 "config": [] 00:42:44.178 } 00:42:44.178 ] 00:42:44.178 }' 00:42:44.178 15:38:31 keyring_file -- keyring/file.sh@115 -- # killprocess 3404475 00:42:44.178 15:38:31 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3404475 ']' 00:42:44.178 15:38:31 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3404475 00:42:44.178 15:38:31 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:44.438 15:38:31 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:44.438 15:38:31 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3404475 00:42:44.438 15:38:31 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:44.438 15:38:31 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:44.438 15:38:31 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3404475' 00:42:44.438 killing process with pid 3404475 00:42:44.438 15:38:31 keyring_file -- common/autotest_common.sh@969 -- # kill 3404475 00:42:44.438 Received shutdown signal, test time was about 1.000000 seconds 00:42:44.438 00:42:44.438 Latency(us) 00:42:44.438 [2024-10-28T14:38:31.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:44.438 [2024-10-28T14:38:31.305Z] =================================================================================================================== 00:42:44.438 [2024-10-28T14:38:31.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:42:44.438 15:38:31 keyring_file -- common/autotest_common.sh@974 -- # wait 3404475 00:42:44.699 15:38:31 keyring_file -- keyring/file.sh@118 -- # bperfpid=3407003 00:42:44.699 15:38:31 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3407003 /var/tmp/bperf.sock 00:42:44.699 15:38:31 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3407003 ']' 00:42:44.699 15:38:31 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:44.699 15:38:31 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:44.699 15:38:31 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:44.699 15:38:31 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:44.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:44.699 15:38:31 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:44.699 15:38:31 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:44.699 "subsystems": [ 00:42:44.699 { 00:42:44.699 "subsystem": "keyring", 00:42:44.699 "config": [ 00:42:44.699 { 00:42:44.699 "method": "keyring_file_add_key", 00:42:44.699 "params": { 00:42:44.699 "name": "key0", 00:42:44.699 "path": "/tmp/tmp.pukQ2WBmfy" 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "keyring_file_add_key", 00:42:44.699 "params": { 00:42:44.699 "name": "key1", 00:42:44.699 "path": "/tmp/tmp.zkvJyGxHto" 00:42:44.699 } 00:42:44.699 } 00:42:44.699 ] 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "subsystem": "iobuf", 00:42:44.699 "config": [ 00:42:44.699 { 00:42:44.699 "method": "iobuf_set_options", 00:42:44.699 "params": { 00:42:44.699 "small_pool_count": 8192, 00:42:44.699 "large_pool_count": 1024, 00:42:44.699 "small_bufsize": 8192, 00:42:44.699 "large_bufsize": 135168, 00:42:44.699 "enable_numa": false 00:42:44.699 } 00:42:44.699 } 00:42:44.699 ] 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "subsystem": "sock", 00:42:44.699 "config": [ 00:42:44.699 { 00:42:44.699 "method": "sock_set_default_impl", 00:42:44.699 "params": { 00:42:44.699 "impl_name": "posix" 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "sock_impl_set_options", 00:42:44.699 "params": { 00:42:44.699 "impl_name": "ssl", 00:42:44.699 "recv_buf_size": 4096, 00:42:44.699 "send_buf_size": 4096, 00:42:44.699 "enable_recv_pipe": true, 00:42:44.699 "enable_quickack": false, 00:42:44.699 "enable_placement_id": 0, 00:42:44.699 "enable_zerocopy_send_server": true, 00:42:44.699 "enable_zerocopy_send_client": false, 00:42:44.699 "zerocopy_threshold": 0, 00:42:44.699 "tls_version": 0, 00:42:44.699 "enable_ktls": false 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "sock_impl_set_options", 00:42:44.699 "params": { 00:42:44.699 "impl_name": "posix", 00:42:44.699 "recv_buf_size": 2097152, 00:42:44.699 "send_buf_size": 2097152, 00:42:44.699 "enable_recv_pipe": true, 00:42:44.699 "enable_quickack": false, 00:42:44.699 "enable_placement_id": 0, 00:42:44.699 "enable_zerocopy_send_server": true, 00:42:44.699 "enable_zerocopy_send_client": false, 00:42:44.699 "zerocopy_threshold": 0, 00:42:44.699 "tls_version": 0, 00:42:44.699 "enable_ktls": false 00:42:44.699 } 00:42:44.699 } 00:42:44.699 ] 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "subsystem": "vmd", 00:42:44.699 
"config": [] 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "subsystem": "accel", 00:42:44.699 "config": [ 00:42:44.699 { 00:42:44.699 "method": "accel_set_options", 00:42:44.699 "params": { 00:42:44.699 "small_cache_size": 128, 00:42:44.699 "large_cache_size": 16, 00:42:44.699 "task_count": 2048, 00:42:44.699 "sequence_count": 2048, 00:42:44.699 "buf_count": 2048 00:42:44.699 } 00:42:44.699 } 00:42:44.699 ] 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "subsystem": "bdev", 00:42:44.699 "config": [ 00:42:44.699 { 00:42:44.699 "method": "bdev_set_options", 00:42:44.699 "params": { 00:42:44.699 "bdev_io_pool_size": 65535, 00:42:44.699 "bdev_io_cache_size": 256, 00:42:44.699 "bdev_auto_examine": true, 00:42:44.699 "iobuf_small_cache_size": 128, 00:42:44.699 "iobuf_large_cache_size": 16 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "bdev_raid_set_options", 00:42:44.699 "params": { 00:42:44.699 "process_window_size_kb": 1024, 00:42:44.699 "process_max_bandwidth_mb_sec": 0 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "bdev_iscsi_set_options", 00:42:44.699 "params": { 00:42:44.699 "timeout_sec": 30 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "bdev_nvme_set_options", 00:42:44.699 "params": { 00:42:44.699 "action_on_timeout": "none", 00:42:44.699 "timeout_us": 0, 00:42:44.699 "timeout_admin_us": 0, 00:42:44.699 "keep_alive_timeout_ms": 10000, 00:42:44.699 "arbitration_burst": 0, 00:42:44.699 "low_priority_weight": 0, 00:42:44.699 "medium_priority_weight": 0, 00:42:44.699 "high_priority_weight": 0, 00:42:44.699 "nvme_adminq_poll_period_us": 10000, 00:42:44.699 "nvme_ioq_poll_period_us": 0, 00:42:44.699 "io_queue_requests": 512, 00:42:44.699 "delay_cmd_submit": true, 00:42:44.699 "transport_retry_count": 4, 00:42:44.699 "bdev_retry_count": 3, 00:42:44.699 "transport_ack_timeout": 0, 00:42:44.699 "ctrlr_loss_timeout_sec": 0, 00:42:44.699 "reconnect_delay_sec": 0, 00:42:44.699 "fast_io_fail_timeout_sec": 0, 00:42:44.699 "disable_auto_failback": false, 00:42:44.699 "generate_uuids": false, 00:42:44.699 "transport_tos": 0, 00:42:44.699 "nvme_error_stat": false, 00:42:44.699 "rdma_srq_size": 0, 00:42:44.699 "io_path_stat": false, 00:42:44.699 "allow_accel_sequence": false, 00:42:44.699 "rdma_max_cq_size": 0, 00:42:44.699 "rdma_cm_event_timeout_ms": 0, 00:42:44.699 "dhchap_digests": [ 00:42:44.699 "sha256", 00:42:44.699 "sha384", 00:42:44.699 "sha512" 00:42:44.699 ], 00:42:44.699 "dhchap_dhgroups": [ 00:42:44.699 "null", 00:42:44.699 "ffdhe2048", 00:42:44.699 "ffdhe3072", 00:42:44.699 "ffdhe4096", 00:42:44.699 "ffdhe6144", 00:42:44.699 "ffdhe8192" 00:42:44.699 ] 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "bdev_nvme_attach_controller", 00:42:44.699 "params": { 00:42:44.699 "name": "nvme0", 00:42:44.699 "trtype": "TCP", 00:42:44.699 "adrfam": "IPv4", 00:42:44.699 "traddr": "127.0.0.1", 00:42:44.699 "trsvcid": "4420", 00:42:44.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.699 "prchk_reftag": false, 00:42:44.699 "prchk_guard": false, 00:42:44.699 "ctrlr_loss_timeout_sec": 0, 00:42:44.699 "reconnect_delay_sec": 0, 00:42:44.699 "fast_io_fail_timeout_sec": 0, 00:42:44.699 "psk": "key0", 00:42:44.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.699 "hdgst": false, 00:42:44.699 "ddgst": false, 00:42:44.699 "multipath": "multipath" 00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "bdev_nvme_set_hotplug", 00:42:44.699 "params": { 00:42:44.699 "period_us": 100000, 00:42:44.699 "enable": false 
00:42:44.699 } 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "method": "bdev_wait_for_examine" 00:42:44.699 } 00:42:44.699 ] 00:42:44.699 }, 00:42:44.699 { 00:42:44.699 "subsystem": "nbd", 00:42:44.699 "config": [] 00:42:44.699 } 00:42:44.699 ] 00:42:44.699 }' 00:42:44.699 15:38:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:44.699 [2024-10-28 15:38:31.509354] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 00:42:44.700 [2024-10-28 15:38:31.509528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407003 ] 00:42:44.960 [2024-10-28 15:38:31.664629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.960 [2024-10-28 15:38:31.780824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:45.220 [2024-10-28 15:38:32.046035] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:45.480 15:38:32 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:45.480 15:38:32 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:45.480 15:38:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:45.480 15:38:32 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:45.480 15:38:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.049 15:38:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:46.050 15:38:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:46.050 15:38:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:46.050 15:38:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.050 15:38:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.050 15:38:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:46.050 15:38:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.310 15:38:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:46.310 15:38:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:46.310 15:38:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:46.310 15:38:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:46.310 15:38:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:46.310 15:38:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:46.310 15:38:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:46.570 15:38:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:46.570 15:38:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:46.570 15:38:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:46.570 15:38:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:46.829 15:38:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:46.829 15:38:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:46.830 15:38:33 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pukQ2WBmfy /tmp/tmp.zkvJyGxHto 00:42:46.830 15:38:33 keyring_file -- keyring/file.sh@20 -- # killprocess 3407003 00:42:46.830 15:38:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3407003 ']' 00:42:46.830 15:38:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3407003 00:42:46.830 15:38:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:46.830 15:38:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:46.830 15:38:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3407003 00:42:47.090 15:38:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:47.090 15:38:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:47.090 15:38:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3407003' 00:42:47.090 killing process with pid 3407003 00:42:47.090 15:38:33 keyring_file -- common/autotest_common.sh@969 -- # kill 3407003 00:42:47.090 Received shutdown signal, test time was about 1.000000 seconds 00:42:47.090 00:42:47.090 Latency(us) 00:42:47.090 [2024-10-28T14:38:33.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:47.090 [2024-10-28T14:38:33.957Z] =================================================================================================================== 00:42:47.090 [2024-10-28T14:38:33.957Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:47.090 15:38:33 keyring_file -- common/autotest_common.sh@974 -- # wait 3407003 00:42:47.350 15:38:34 keyring_file -- keyring/file.sh@21 -- # killprocess 3404342 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3404342 ']' 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3404342 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3404342 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3404342' 00:42:47.350 killing process with pid 3404342 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@969 -- # kill 3404342 00:42:47.350 15:38:34 keyring_file -- common/autotest_common.sh@974 -- # wait 3404342 00:42:47.917 00:42:47.917 real 0m26.075s 00:42:47.917 user 1m7.479s 00:42:47.917 sys 0m5.202s 00:42:47.917 15:38:34 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:47.917 15:38:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.917 ************************************ 00:42:47.917 END TEST keyring_file 00:42:47.917 ************************************ 00:42:48.178 15:38:34 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:48.178 15:38:34 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:48.178 15:38:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:48.178 15:38:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 
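Before the teardown above, file.sh verifies the keyring state over the bdevperf RPC socket. Condensed, with the paths exactly as they appear in this log, those checks amount to roughly the following sketch (the expected values in the comments are the ones the (( ... )) assertions in the trace compare against; this is a paraphrase of keyring/common.sh helpers, not a quote):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$rpc" -s "$sock" keyring_get_keys | jq length                                              # expected: 2
    "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt     # expected: 2
    "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key1")' | jq -r .refcnt     # expected: 1
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'                              # expected: nvme0
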
00:42:48.178 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:42:48.178 ************************************ 00:42:48.178 START TEST keyring_linux 00:42:48.178 ************************************ 00:42:48.178 15:38:34 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:48.178 Joined session keyring: 714404967 00:42:48.178 * Looking for test storage... 00:42:48.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:48.178 15:38:34 keyring_linux -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:42:48.178 15:38:34 keyring_linux -- common/autotest_common.sh@1689 -- # lcov --version 00:42:48.178 15:38:34 keyring_linux -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:42:48.178 15:38:35 keyring_linux -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:48.178 15:38:35 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:48.178 15:38:35 keyring_linux -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:48.178 15:38:35 keyring_linux -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:42:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.178 --rc genhtml_branch_coverage=1 00:42:48.178 --rc genhtml_function_coverage=1 00:42:48.178 --rc genhtml_legend=1 00:42:48.178 --rc geninfo_all_blocks=1 00:42:48.178 --rc geninfo_unexecuted_blocks=1 00:42:48.178 00:42:48.178 ' 00:42:48.178 15:38:35 keyring_linux -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:42:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.178 --rc genhtml_branch_coverage=1 00:42:48.178 --rc genhtml_function_coverage=1 00:42:48.178 --rc genhtml_legend=1 00:42:48.178 --rc geninfo_all_blocks=1 00:42:48.178 --rc geninfo_unexecuted_blocks=1 00:42:48.178 00:42:48.178 ' 00:42:48.178 15:38:35 keyring_linux -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:42:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.178 --rc genhtml_branch_coverage=1 00:42:48.178 --rc genhtml_function_coverage=1 00:42:48.178 --rc genhtml_legend=1 00:42:48.178 --rc geninfo_all_blocks=1 00:42:48.178 --rc geninfo_unexecuted_blocks=1 00:42:48.178 00:42:48.178 ' 00:42:48.178 15:38:35 keyring_linux -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:42:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.178 --rc genhtml_branch_coverage=1 00:42:48.178 --rc genhtml_function_coverage=1 00:42:48.178 --rc genhtml_legend=1 00:42:48.178 --rc geninfo_all_blocks=1 00:42:48.178 --rc geninfo_unexecuted_blocks=1 00:42:48.178 00:42:48.178 ' 00:42:48.178 15:38:35 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:48.178 15:38:35 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:48.178 15:38:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:48.179 15:38:35 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:48.179 15:38:35 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:48.179 15:38:35 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:48.179 15:38:35 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:48.179 15:38:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.179 15:38:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.179 15:38:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.179 15:38:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:48.179 15:38:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
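nvmf/common.sh derives a per-run host identity here for tests that use nvme connect; in shorthand (the values are this run's, and the parameter expansion shown for the hostid is an assumption rather than a quote from nvmf/common.sh; keyring/linux.sh itself pins its own hostnqn, nqn.2016-06.io.spdk:host0, a few lines below):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
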
00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:48.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:48.179 15:38:35 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:48.179 15:38:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:48.440 /tmp/:spdk-test:key0 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:48.440 
15:38:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:48.440 15:38:35 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:48.440 15:38:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:48.440 /tmp/:spdk-test:key1 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3407498 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:48.440 15:38:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3407498 00:42:48.440 15:38:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3407498 ']' 00:42:48.440 15:38:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:48.440 15:38:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:48.440 15:38:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:48.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:48.440 15:38:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:48.440 15:38:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:48.440 [2024-10-28 15:38:35.254677] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
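prep_key, as traced above for both keys, boils down to writing the interchange-format PSK into a 0600 file; after that, linux.sh brings up the target and waits for its RPC socket. A condensed sketch (the redirection into the key path and the background-launch line are assumptions about the helpers, not quotes; the key material, paths and PID are this run's):

    # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0
    # produces NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    # (the same string keyctl print dumps later in this log)

    # start the target and block until /var/tmp/spdk.sock answers RPCs
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt &
    tgtpid=$!                 # 3407498 in this run
    waitforlisten "$tgtpid"
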
00:42:48.440 [2024-10-28 15:38:35.254817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407498 ] 00:42:48.701 [2024-10-28 15:38:35.421808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.701 [2024-10-28 15:38:35.547338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:49.273 15:38:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:49.273 [2024-10-28 15:38:36.046764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:49.273 null0 00:42:49.273 [2024-10-28 15:38:36.080834] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:49.273 [2024-10-28 15:38:36.081804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:49.273 15:38:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:49.273 988780376 00:42:49.273 15:38:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:49.273 872601169 00:42:49.273 15:38:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3407630 00:42:49.273 15:38:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:49.273 15:38:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3407630 /var/tmp/bperf.sock 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3407630 ']' 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:49.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:49.273 15:38:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:49.533 [2024-10-28 15:38:36.209978] Starting SPDK v25.01-pre git sha1 45379ed84 / DPDK 24.03.0 initialization... 
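The kernel-keyring half of the test is plain keyctl against the session keyring; condensed from the calls traced above and further below (key names, payloads and serial numbers are the ones this run produced):

    # register both PSKs as 'user' keys in the session keyring (@s)
    keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s   # -> 988780376
    keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s   # -> 872601169

    # later assertions resolve the name back to its serial and dump the payload
    keyctl search @s user :spdk-test:key0    # 988780376
    keyctl print 988780376                   # NVMeTLSkey-1:00:MDAx...JEiQ:

    # cleanup unlinks both keys again ("1 links removed" per key in this log)
    keyctl unlink 988780376
    keyctl unlink 872601169
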
00:42:49.533 [2024-10-28 15:38:36.210137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407630 ] 00:42:49.533 [2024-10-28 15:38:36.382234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.793 [2024-10-28 15:38:36.498874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:50.734 15:38:37 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:50.734 15:38:37 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:50.734 15:38:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:50.734 15:38:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:50.994 15:38:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:50.994 15:38:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:51.565 15:38:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:51.565 15:38:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:52.135 [2024-10-28 15:38:38.905580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:52.135 nvme0n1 00:42:52.395 15:38:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:52.395 15:38:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:52.395 15:38:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:52.395 15:38:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:52.395 15:38:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:52.395 15:38:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.655 15:38:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:52.655 15:38:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:52.655 15:38:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:52.655 15:38:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:52.914 15:38:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.914 15:38:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.914 15:38:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:53.173 15:38:39 keyring_linux -- keyring/linux.sh@25 -- # sn=988780376 00:42:53.173 15:38:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:53.173 15:38:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:53.173 15:38:39 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 988780376 == \9\8\8\7\8\0\3\7\6 ]] 00:42:53.173 15:38:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 988780376 00:42:53.173 15:38:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:53.173 15:38:39 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:53.173 Running I/O for 1 seconds... 00:42:54.550 4533.00 IOPS, 17.71 MiB/s 00:42:54.550 Latency(us) 00:42:54.550 [2024-10-28T14:38:41.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:54.550 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:54.550 nvme0n1 : 1.03 4532.31 17.70 0.00 0.00 27821.80 6505.05 33204.91 00:42:54.550 [2024-10-28T14:38:41.417Z] =================================================================================================================== 00:42:54.550 [2024-10-28T14:38:41.417Z] Total : 4532.31 17.70 0.00 0.00 27821.80 6505.05 33204.91 00:42:54.550 { 00:42:54.550 "results": [ 00:42:54.550 { 00:42:54.550 "job": "nvme0n1", 00:42:54.550 "core_mask": "0x2", 00:42:54.550 "workload": "randread", 00:42:54.550 "status": "finished", 00:42:54.550 "queue_depth": 128, 00:42:54.550 "io_size": 4096, 00:42:54.550 "runtime": 1.028615, 00:42:54.550 "iops": 4532.308006396951, 00:42:54.550 "mibps": 17.70432814998809, 00:42:54.550 "io_failed": 0, 00:42:54.550 "io_timeout": 0, 00:42:54.550 "avg_latency_us": 27821.80064000508, 00:42:54.550 "min_latency_us": 6505.054814814815, 00:42:54.550 "max_latency_us": 33204.90666666667 00:42:54.550 } 00:42:54.550 ], 00:42:54.550 "core_count": 1 00:42:54.550 } 00:42:54.550 15:38:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:54.551 15:38:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:54.551 15:38:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:54.551 15:38:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:54.551 15:38:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:54.808 15:38:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:54.808 15:38:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.808 15:38:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:55.067 15:38:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:55.067 15:38:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:55.067 15:38:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:55.067 15:38:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:55.067 15:38:41 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:55.067 15:38:41 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:55.067 15:38:41 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:55.067 15:38:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:55.067 15:38:41 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:55.067 15:38:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:55.067 15:38:41 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:55.067 15:38:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:55.325 [2024-10-28 15:38:42.160910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:55.325 [2024-10-28 15:38:42.161090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df6e0 (107): Transport endpoint is not connected 00:42:55.325 [2024-10-28 15:38:42.162072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df6e0 (9): Bad file descriptor 00:42:55.325 [2024-10-28 15:38:42.163067] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:55.325 [2024-10-28 15:38:42.163114] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:55.325 [2024-10-28 15:38:42.163149] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:55.325 [2024-10-28 15:38:42.163188] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
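What fails here is the deliberate negative case: linux.sh re-issues the same attach RPC against the bperf socket but points it at the second key, and wraps it in the NOT helper from autotest_common.sh because the call is expected to fail (presumably the listener side was provisioned with key0's PSK, so a handshake with key1 cannot complete). In shorthand:

    # expected to fail; NOT inverts the exit status
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1

The resulting JSON-RPC error below is the evidence the NOT wrapper is looking for.
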
00:42:55.325 request: 00:42:55.325 { 00:42:55.325 "name": "nvme0", 00:42:55.325 "trtype": "tcp", 00:42:55.325 "traddr": "127.0.0.1", 00:42:55.325 "adrfam": "ipv4", 00:42:55.325 "trsvcid": "4420", 00:42:55.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:55.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:55.325 "prchk_reftag": false, 00:42:55.325 "prchk_guard": false, 00:42:55.325 "hdgst": false, 00:42:55.325 "ddgst": false, 00:42:55.325 "psk": ":spdk-test:key1", 00:42:55.325 "allow_unrecognized_csi": false, 00:42:55.325 "method": "bdev_nvme_attach_controller", 00:42:55.325 "req_id": 1 00:42:55.325 } 00:42:55.325 Got JSON-RPC error response 00:42:55.325 response: 00:42:55.325 { 00:42:55.325 "code": -5, 00:42:55.325 "message": "Input/output error" 00:42:55.325 } 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@33 -- # sn=988780376 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 988780376 00:42:55.583 1 links removed 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@33 -- # sn=872601169 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 872601169 00:42:55.583 1 links removed 00:42:55.583 15:38:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3407630 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3407630 ']' 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3407630 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3407630 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3407630' 00:42:55.583 killing process with pid 3407630 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@969 -- # kill 3407630 00:42:55.583 Received shutdown signal, test time was about 1.000000 seconds 00:42:55.583 00:42:55.583 
Latency(us) 00:42:55.583 [2024-10-28T14:38:42.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:55.583 [2024-10-28T14:38:42.450Z] =================================================================================================================== 00:42:55.583 [2024-10-28T14:38:42.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:55.583 15:38:42 keyring_linux -- common/autotest_common.sh@974 -- # wait 3407630 00:42:55.841 15:38:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3407498 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3407498 ']' 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3407498 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3407498 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3407498' 00:42:55.841 killing process with pid 3407498 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@969 -- # kill 3407498 00:42:55.841 15:38:42 keyring_linux -- common/autotest_common.sh@974 -- # wait 3407498 00:42:56.407 00:42:56.407 real 0m8.442s 00:42:56.407 user 0m17.357s 00:42:56.407 sys 0m2.417s 00:42:56.407 15:38:43 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:56.407 15:38:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:56.407 ************************************ 00:42:56.407 END TEST keyring_linux 00:42:56.407 ************************************ 00:42:56.665 15:38:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:56.665 15:38:43 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:56.665 15:38:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:56.665 15:38:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:56.665 15:38:43 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:56.665 15:38:43 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:56.665 15:38:43 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:56.665 15:38:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:56.665 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:42:56.665 15:38:43 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:56.665 15:38:43 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:56.665 15:38:43 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:56.665 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:42:59.199 INFO: APP EXITING 
00:42:59.199 INFO: killing all VMs 00:42:59.199 INFO: killing vhost app 00:42:59.199 INFO: EXIT DONE 00:43:00.578 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:43:00.578 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:00.578 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:00.578 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:00.578 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:00.578 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:00.578 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:00.837 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:00.837 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:00.837 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:00.837 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:00.837 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:00.837 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:00.837 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:00.837 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:00.837 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:00.837 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:02.744 Cleaning 00:43:02.744 Removing: /var/run/dpdk/spdk0/config 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:02.744 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:02.744 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:02.744 Removing: /var/run/dpdk/spdk1/config 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:02.744 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:02.744 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:02.744 Removing: /var/run/dpdk/spdk2/config 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:02.744 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:02.744 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:02.744 Removing: /var/run/dpdk/spdk3/config 00:43:02.744 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:02.744 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:02.744 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:02.744 Removing: /var/run/dpdk/spdk4/config 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:02.744 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:02.744 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:02.744 Removing: /dev/shm/bdev_svc_trace.1 00:43:02.744 Removing: /dev/shm/nvmf_trace.0 00:43:02.744 Removing: /dev/shm/spdk_tgt_trace.pid3038496 00:43:02.744 Removing: /var/run/dpdk/spdk0 00:43:02.744 Removing: /var/run/dpdk/spdk1 00:43:02.744 Removing: /var/run/dpdk/spdk2 00:43:02.744 Removing: /var/run/dpdk/spdk3 00:43:02.744 Removing: /var/run/dpdk/spdk4 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3036636 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3037514 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3038496 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3039173 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3039866 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3040009 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3040728 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3040865 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3041131 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3042974 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3044110 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3044492 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3044823 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3045160 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3045481 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3045653 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3045921 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3046115 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3046449 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3049725 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3050027 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3050392 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3050546 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3050984 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3051159 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3052150 00:43:02.744 Removing: /var/run/dpdk/spdk_pid3052197 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3052491 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3052623 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3052788 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3052925 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3053415 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3053583 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3053784 00:43:03.014 Removing: 
/var/run/dpdk/spdk_pid3056303 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3059275 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3066721 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3067138 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3069798 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3069977 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3073044 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3077418 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3080404 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3088386 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3093913 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3095196 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3095888 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3107358 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3109735 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3139236 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3142667 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3147294 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3152368 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3152370 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3153126 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3154128 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3154827 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3155223 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3155236 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3155487 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3155626 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3155632 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3156281 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3156812 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3157472 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3157863 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3157869 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3158129 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3159420 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3160276 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3165707 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3210941 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3214414 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3215586 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3216914 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3217181 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3217326 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3217589 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3218250 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3219604 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3220926 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3221999 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3223785 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3224242 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3225008 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3227695 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3231197 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3231199 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3231201 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3233506 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3238506 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3241257 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3245165 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3246111 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3247211 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3248306 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3251380 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3253879 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3259033 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3259035 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3262184 00:43:03.014 Removing: 
/var/run/dpdk/spdk_pid3262343 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3262473 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3262741 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3262746 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3265917 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3266254 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3269195 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3271185 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3275008 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3278740 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3286732 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3291769 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3291776 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3307735 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3308405 00:43:03.014 Removing: /var/run/dpdk/spdk_pid3308959 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3309594 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3310310 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3310852 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3311381 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3311932 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3314688 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3314833 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3318769 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3318941 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3322964 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3325987 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3333271 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3333663 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3336316 00:43:03.272 Removing: /var/run/dpdk/spdk_pid3336457 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3339620 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3343982 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3346915 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3354986 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3360480 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3361779 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3362432 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3373687 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3375975 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3377975 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3383527 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3383535 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3387189 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3388524 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3389945 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3390761 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3392291 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3393171 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3398735 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3399129 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3399518 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3401084 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3401476 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3401754 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3404342 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3404475 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3407003 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3407498 00:43:03.273 Removing: /var/run/dpdk/spdk_pid3407630 00:43:03.273 Clean 00:43:03.273 15:38:50 -- common/autotest_common.sh@1449 -- # return 0 00:43:03.273 15:38:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:03.273 15:38:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:03.273 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:43:03.273 15:38:50 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:03.273 
15:38:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:03.273 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:43:03.532 15:38:50 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:03.532 15:38:50 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:03.532 15:38:50 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:03.532 15:38:50 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:03.532 15:38:50 -- spdk/autotest.sh@394 -- # hostname 00:43:03.532 15:38:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:03.791 geninfo: WARNING: invalid characters removed from testname! 00:44:40.317 15:40:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:40.317 15:40:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:40.317 15:40:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:42.851 15:40:29 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:47.038 15:40:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:52.307 15:40:38 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:56.499 15:40:42 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:56.499 15:40:42 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:44:56.499 15:40:42 -- common/autotest_common.sh@1689 -- $ lcov --version 00:44:56.499 15:40:42 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:44:56.499 15:40:43 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:44:56.499 15:40:43 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:44:56.499 15:40:43 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:44:56.499 15:40:43 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:44:56.499 15:40:43 -- scripts/common.sh@336 -- $ IFS=.-: 00:44:56.499 15:40:43 -- scripts/common.sh@336 -- $ read -ra ver1 00:44:56.499 15:40:43 -- scripts/common.sh@337 -- $ IFS=.-: 00:44:56.499 15:40:43 -- scripts/common.sh@337 -- $ read -ra ver2 00:44:56.499 15:40:43 -- scripts/common.sh@338 -- $ local 'op=<' 00:44:56.499 15:40:43 -- scripts/common.sh@340 -- $ ver1_l=2 00:44:56.499 15:40:43 -- scripts/common.sh@341 -- $ ver2_l=1 00:44:56.499 15:40:43 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:44:56.499 15:40:43 -- scripts/common.sh@344 -- $ case "$op" in 00:44:56.499 15:40:43 -- scripts/common.sh@345 -- $ : 1 00:44:56.499 15:40:43 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:44:56.499 15:40:43 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:56.499 15:40:43 -- scripts/common.sh@365 -- $ decimal 1 00:44:56.499 15:40:43 -- scripts/common.sh@353 -- $ local d=1 00:44:56.499 15:40:43 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:44:56.499 15:40:43 -- scripts/common.sh@355 -- $ echo 1 00:44:56.499 15:40:43 -- scripts/common.sh@365 -- $ ver1[v]=1 00:44:56.499 15:40:43 -- scripts/common.sh@366 -- $ decimal 2 00:44:56.499 15:40:43 -- scripts/common.sh@353 -- $ local d=2 00:44:56.499 15:40:43 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:44:56.499 15:40:43 -- scripts/common.sh@355 -- $ echo 2 00:44:56.499 15:40:43 -- scripts/common.sh@366 -- $ ver2[v]=2 00:44:56.499 15:40:43 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:44:56.499 15:40:43 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:44:56.499 15:40:43 -- scripts/common.sh@368 -- $ return 0 00:44:56.499 15:40:43 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:56.499 15:40:43 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:44:56.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:56.499 --rc genhtml_branch_coverage=1 00:44:56.499 --rc genhtml_function_coverage=1 00:44:56.499 --rc genhtml_legend=1 00:44:56.499 --rc geninfo_all_blocks=1 00:44:56.499 --rc geninfo_unexecuted_blocks=1 00:44:56.499 00:44:56.499 ' 00:44:56.499 15:40:43 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:44:56.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:56.499 --rc genhtml_branch_coverage=1 00:44:56.499 --rc genhtml_function_coverage=1 00:44:56.499 --rc genhtml_legend=1 00:44:56.499 --rc geninfo_all_blocks=1 00:44:56.499 --rc geninfo_unexecuted_blocks=1 00:44:56.499 00:44:56.499 ' 00:44:56.499 15:40:43 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:44:56.499 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:56.499 --rc genhtml_branch_coverage=1 00:44:56.499 --rc genhtml_function_coverage=1 00:44:56.499 --rc genhtml_legend=1 00:44:56.499 --rc geninfo_all_blocks=1 00:44:56.499 --rc geninfo_unexecuted_blocks=1 00:44:56.499 00:44:56.499 ' 00:44:56.499 15:40:43 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:44:56.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:56.499 --rc genhtml_branch_coverage=1 00:44:56.499 --rc genhtml_function_coverage=1 00:44:56.499 --rc genhtml_legend=1 00:44:56.499 --rc geninfo_all_blocks=1 00:44:56.499 --rc geninfo_unexecuted_blocks=1 00:44:56.499 00:44:56.499 ' 00:44:56.499 15:40:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:56.499 15:40:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:44:56.499 15:40:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:56.499 15:40:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:56.499 15:40:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:56.499 15:40:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:56.499 15:40:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:56.499 15:40:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:56.499 15:40:43 -- paths/export.sh@5 -- $ export PATH 00:44:56.499 15:40:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:56.499 15:40:43 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:44:56.499 15:40:43 -- common/autobuild_common.sh@486 -- $ date +%s 00:44:56.499 15:40:43 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730126443.XXXXXX 00:44:56.499 15:40:43 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730126443.A2zhAK 00:44:56.499 15:40:43 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:44:56.499 15:40:43 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:44:56.499 15:40:43 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:44:56.499 15:40:43 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:44:56.499 15:40:43 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:44:56.499 15:40:43 -- common/autobuild_common.sh@502 -- $ get_config_params 00:44:56.499 15:40:43 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:44:56.499 15:40:43 -- common/autotest_common.sh@10 -- $ set +x 00:44:56.499 15:40:43 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:44:56.499 15:40:43 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:44:56.499 15:40:43 -- pm/common@17 -- $ local monitor 00:44:56.499 15:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:56.499 15:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:56.499 15:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:56.499 15:40:43 -- pm/common@21 -- $ date +%s 00:44:56.499 15:40:43 -- pm/common@21 -- $ date +%s 00:44:56.499 15:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:56.499 15:40:43 -- pm/common@21 -- $ date +%s 00:44:56.499 15:40:43 -- pm/common@25 -- $ sleep 1 00:44:56.499 15:40:43 -- pm/common@21 -- $ date +%s 00:44:56.499 15:40:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730126443 00:44:56.499 15:40:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730126443 00:44:56.499 15:40:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730126443 00:44:56.499 15:40:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730126443 00:44:56.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730126443_collect-vmstat.pm.log 00:44:56.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730126443_collect-cpu-load.pm.log 00:44:56.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730126443_collect-cpu-temp.pm.log 00:44:56.499 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730126443_collect-bmc-pm.bmc.pm.log 00:44:57.431 15:40:44 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:44:57.431 15:40:44 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:44:57.431 15:40:44 -- spdk/autopackage.sh@14 -- $ timing_finish 
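The coverage steps traced above (spdk/autotest.sh@394 through @404) capture test-time coverage into cov_test.info, merge it with the pre-test baseline cov_base.info, and then strip third-party, system and example sources with repeated lcov -r passes before cov_total.info is published. A minimal sketch of that sequence; the workspace path, the --rc options and the exclude patterns are taken from the trace, while everything else (variable names, the hostname tag) is illustrative rather than the verbatim autotest script:

# Hedged sketch of the lcov merge-and-filter flow, not the verbatim autotest.sh.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT_DIR=$SPDK_DIR/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# Capture the coverage produced while the tests ran, tagged with the hostname.
lcov $LCOV_OPTS -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"

# Drop bundled third-party code, system headers and example/app sources.
# (The trace additionally passes --ignore-errors unused for the '/usr/*' removal on newer lcov.)
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov $LCOV_OPTS -q -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done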
00:44:57.431 15:40:44 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:57.431 15:40:44 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:57.431 15:40:44 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:57.431 15:40:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:44:57.431 15:40:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:44:57.431 15:40:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:44:57.431 15:40:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:57.431 15:40:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:44:57.431 15:40:44 -- pm/common@44 -- $ pid=3420917 00:44:57.431 15:40:44 -- pm/common@50 -- $ kill -TERM 3420917 00:44:57.431 15:40:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:57.431 15:40:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:44:57.431 15:40:44 -- pm/common@44 -- $ pid=3420919 00:44:57.431 15:40:44 -- pm/common@50 -- $ kill -TERM 3420919 00:44:57.431 15:40:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:57.431 15:40:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:44:57.431 15:40:44 -- pm/common@44 -- $ pid=3420921 00:44:57.431 15:40:44 -- pm/common@50 -- $ kill -TERM 3420921 00:44:57.431 15:40:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:57.431 15:40:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:44:57.431 15:40:44 -- pm/common@44 -- $ pid=3420946 00:44:57.431 15:40:44 -- pm/common@50 -- $ sudo -E kill -TERM 3420946 00:44:57.688 + [[ -n 2952311 ]] 00:44:57.688 + sudo kill 2952311 00:44:57.696 [Pipeline] } 00:44:57.712 [Pipeline] // stage 00:44:57.718 [Pipeline] } 00:44:57.734 [Pipeline] // timeout 00:44:57.739 [Pipeline] } 00:44:57.755 [Pipeline] // catchError 00:44:57.760 [Pipeline] } 00:44:57.777 [Pipeline] // wrap 00:44:57.781 [Pipeline] } 00:44:57.794 [Pipeline] // catchError 00:44:57.803 [Pipeline] stage 00:44:57.806 [Pipeline] { (Epilogue) 00:44:57.820 [Pipeline] catchError 00:44:57.822 [Pipeline] { 00:44:57.836 [Pipeline] echo 00:44:57.838 Cleanup processes 00:44:57.845 [Pipeline] sh 00:44:58.127 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:58.127 3421090 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:44:58.127 3421231 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:58.139 [Pipeline] sh 00:44:58.421 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:58.421 ++ grep -v 'sudo pgrep' 00:44:58.421 ++ awk '{print $1}' 00:44:58.421 + sudo kill -9 3421090 00:44:58.432 [Pipeline] sh 00:44:58.713 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:30.801 [Pipeline] sh 00:45:31.086 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:31.344 Artifacts sizes are good 00:45:31.362 [Pipeline] archiveArtifacts 00:45:31.372 Archiving artifacts 00:45:31.543 [Pipeline] sh 00:45:31.855 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 
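The stop_monitor_resources trace above pairs with the start_monitor_resources call earlier in the log: each scripts/perf/pm collector is launched with a shared -p prefix and the power/ output directory, its PID ends up in a collect-*.pid file, and teardown sends TERM to whatever PID files are still present (via sudo for the BMC collector). A minimal sketch of that start/stop pattern; the collector names and flags come from the trace, and recording the PIDs from the parent shell is an assumption made for illustration:

# Hedged sketch of the pm monitor start/stop pattern, not the verbatim pm/common.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
POWER_DIR=$SPDK_DIR/../output/power
PREFIX=monitor.autopackage.sh.$(date +%s)
MONITORS=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

start_monitors() {
  mkdir -p "$POWER_DIR"
  for m in "${MONITORS[@]}"; do
    # -d: where the .pm.log lands, -l: log to a file, -p: common name prefix for this run.
    "$SPDK_DIR/scripts/perf/pm/$m" -d "$POWER_DIR" -l -p "$PREFIX" &
    echo $! > "$POWER_DIR/$m.pid"   # assumed here; the real helper manages its own PID file
  done
}

stop_monitors() {
  for m in "${MONITORS[@]}"; do
    pidfile=$POWER_DIR/$m.pid
    [[ -e $pidfile ]] || continue
    kill -TERM "$(cat "$pidfile")" 2>/dev/null || true   # collect-bmc-pm is stopped with sudo -E kill in the real job
    rm -f "$pidfile"
  done
}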
00:45:31.873 [Pipeline] cleanWs 00:45:31.886 [WS-CLEANUP] Deleting project workspace... 00:45:31.886 [WS-CLEANUP] Deferred wipeout is used... 00:45:31.893 [WS-CLEANUP] done 00:45:31.895 [Pipeline] } 00:45:31.923 [Pipeline] // catchError 00:45:31.961 [Pipeline] sh 00:45:32.247 + logger -p user.info -t JENKINS-CI 00:45:32.257 [Pipeline] } 00:45:32.271 [Pipeline] // stage 00:45:32.276 [Pipeline] } 00:45:32.292 [Pipeline] // node 00:45:32.297 [Pipeline] End of Pipeline 00:45:32.334 Finished: SUCCESS
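On the timing_finish step in the epilogue above: the per-step timings accumulated in timing.txt are rendered with FlameGraph when /usr/local/FlameGraph/flamegraph.pl is executable, using the Step:/seconds labels shown in the trace. A minimal sketch; the folded-stack input format and the timing.svg output path are assumptions, while the flags and script location come from the trace:

# Hedged sketch of the build-timing flamegraph step; the output redirect is illustrative.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT_DIR=$SPDK_DIR/../output
FLAMEGRAPH=/usr/local/FlameGraph/flamegraph.pl

if [[ -x $FLAMEGRAPH ]]; then
  # timing.txt is expected to hold folded "step;substep seconds" samples from the run.
  "$FLAMEGRAPH" --title 'Build Timing' --nametype Step: --countname seconds \
    "$OUT_DIR/timing.txt" > "$OUT_DIR/timing.svg"
fi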